Purpose: Invalidity Analysis


Patent: US9635134B2
Filed: 2012-07-03
Issued: 2017-04-25
Patent Holder: (Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC
Inventor(s): Junwei Cao, Yuxin Wan

Title: Resource management in a cloud computing environment

Abstract: Technologies and implementations for managing cloud resources.




Disclaimer: The promise of Apex Standards Pseudo Claim Charting (PCC) is not to replace expert opinion but to provide due diligence and transparency prior to high-precision charting. PCC conducts aggressive mapping (based on Broadest Reasonable, Ordinary, or Customary Interpretation and multilingual translation) between a target patent's claim elements and other documents (potential technical standard specifications or prior art in the same or different jurisdictions), allowing a top-down, a priori evaluation with which stakeholders can quickly and effectively assess standard essentiality (potential strengths) or invalidity (potential weaknesses) before making complex, high-value decisions. PCC is designed to relieve the initial burden of proof through an exhaustive listing of contextual semantic mappings that can serve as building blocks toward a litigation-ready work product. Stakeholders may then build upon the shortlisted PCC or identify other relevant materials in order to formulate strategy and achieve further purposes.



Reference categories: Non-Patent Literature, WIPO Prior Art, EP Prior Art, US Prior Art, CN Prior Art, JP Prior Art, KR Prior Art
  Independent Claim

Ground | Reference | Owner of the Reference | Title | Semantic Mapping | Basis | Anticipation | Challenged Claims (claims 1-18)
1

COMPUTER. 28 (4): 14-22 APR 1995

(Adler, 1995)
Coordinated Computing Technologies, Medford, MA, USA
DISTRIBUTED COORDINATION MODELS FOR CLIENT-SERVER COMPUTING

maximum capacity → individual client
alternate cloud, alternate cloud resources → multiple service
alternate cloud resources include one → client access

XXXXXX
2

2008 IEEE NETWORK OPERATIONS AND MANAGEMENT SYMPOSIUM, VOLS 1 AND 2. : 363-370 2008

(Mehta, 2008)
IBM India Research Laboratory
ReCon: A Tool To Recommend Dynamic Server Consolidation In Multi-cluster Data Centers

cloud resources, cloud computing environment → virtual machines, physical server
consumption rate, memory consumption rate → power cost

XXXXXXXXXXXXXX
3

2000 IEEE SERVICE PORTABILITY AND VIRTUAL CUSTOMER ENVIRONMENTS. : 20-28 2001

(Mao, 2001)
University of California, Berkeley
Achieving Service Portability In ICEBERG

alternate cloud → end devices
access rates → end user

XXXXXX
4

ICS 09: PROCEEDINGS OF THE 2009 ACM SIGARCH INTERNATIONAL CONFERENCE ON SUPERCOMPUTING. : 225-234 2009

(Liu, 2009)
IBM Thomas J. Watson Research Center
Virtualization Polling Engine (VPE): Using Dedicated CPU Cores To Accelerate I/O Virtualization

cloud computing environment → Virtual machine
Internet resources → direct user
CPU consumption rate → CPU core

XXXXXX
5

2007 10TH IFIP/IEEE INTERNATIONAL SYMPOSIUM ON INTEGRATED NETWORK MANAGEMENT (IM 2009), VOLS 1 AND 2. : 139-148 2007

(Steinder, 2007)
International Business Machines Corporation
Server Virtualization In Autonomic Management Of Heterogeneous Workloads

cloud resources, cloud computing environment → virtual machines
consumption rate, memory consumption rate → virtual server

XXXXXXXXXXXXXX
6

COMPUTER COMMUNICATIONS. 29 (9): 1271-1283 Sp. Iss. SI MAY 31 2006

(Wang, 2006)
Agency for Science, Technology and Research, Singapore (A*STAR)
Design And Development Of Ethernet-based Storage Area Network Protocol

memory consumption rate → network protocol
memory usage → storage area

XXXXXX
7

MIDDLEWARE 2006, PROCEEDINGS. 4290: 342-362 2006

(Gupta, 2006)
University of California, San Diego
Enforcing Performance Isolation Across Virtual Machines In Xen

cloud computing resource manager → allocating resources
cloud resources, cloud computing environment → virtual machines
second resource → Virtual machines

XXXXXXXXXXXXXXXXXX
8

2006 IEEE/IFIP NETWORK OPERATIONS AND MANAGEMENT SYMPOSIUM, VOLS 1 AND 2. : 373-381 2006

(Khanna, 2006)
Purdue University
Application Performance Management In Virtualized Server Environments

alternate cloud resources, community cloud → physical server
second resource management scheme → key performance

XXXXXXX
9

Twenty-Second IEEE/Thirteenth NASA Goddard Conference On Mass Storage Systems And Technologies, Proceedings. : 118-127 2005

(Banikazemi, 2005)
IBM Research
Storage-based Intrusion Detection For Storage Area Networks (SANs)

cloud resources → storage system
cloud computing, public cloud → time copy, N storage

XXXXXXXXXXXXXXXXXX
10

ADVANCES IN GRID COMPUTING - EGC 2005. 3470: 786-795 2005

(Cai, 2005)
The University of Lancaster
The Gridkit Distributed Resource Management Framework

cloud computing, memory usage → resource requirements
hybrid cloud → resource availability

XXXXXXXXXXXXXXX
11

JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING. 64 (9): 1069-1085 SEP 2004

(He, 2004)
Tennessee Technological University (Tennessee Tech), The University of Rhode Island (URI)
STICS: SCSI-to-IP Cache For Storage Area Networks

Internet resources → network services
processor usage tracking → processing unit
memory usage → storage area

XXXXXXXXXXXX
12

2004 12TH IEEE INTERNATIONAL CONFERENCE ON NETWORKS, VOLS 1 AND 2 , PROCEEDINGS. : 48-52 2004

(Wang, 2004)
The Data Storage Institute (DSI) Singapore
Design And Development Of Ethernet-based Storage Area Network Protocol

memory consumption rate → network protocol
memory usage → storage area

XXXXXX
13

NUCLEAR INSTRUMENTS & METHODS IN PHYSICS RESEARCH SECTION A-ACCELERATORS SPECTROMETERS DETECTORS AND ASSOCIATED EQUIPMENT. 534 (1-2): 29-32 NOV 21 2004

(Van Wezel, 2004)
Forschungszentrum Karlsruhe, Institute for Scientific Computing, P.O. Box 36 40, 76021 Karlsruhe, Germany
First Experiences With Large SAN Storage And Linux

cloud resources → storage system
memory usage → storage area

XXXXXXXXXXXXXXXX
14

IBM SYSTEMS JOURNAL. 42 (1): 29-37 2003

(Jann, 2003)
International Business Machines Corporation, IBM Server Group
Dynamic Reconfiguration: Basic Building Blocks For Autonomic Computing On IBM PSeries Servers

replacement scheme → overall performance
memory usage → load demand

XXXXXXXXXXXXX
15

USENIX ASSOCIATION PROCEEDINGS OF THE FAST 02 CONFERENCE ON FILE AND STORAGE TECHNOLOGIES. : 189-201 2002

(Anderson, 2002)
Hewlett Packard Labs
Selecting RAID Levels For Disk Arrays

cloud resources → storage system
I/O access rate → logical unit

XXXXXXXXXXXXXX
16

CN102402458A

(A·贾亚莫汉, 2012)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Technology Licensing LLC
Virtual machine and/or multi-level scheduling support on systems having asymmetric processor cores (具有非对称处理器核的系统上的虚拟机和/或多级调度支持)

cloud computing, cloud computing resource manager → 一种计算设备 (a computing device)
processor usage, memory usage → 的使用 (use of)
access rates → 可用的 (available)

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches the claimed limitations wherein providing a tenant with user access to the generated data collection…

discloses that the system is implemented on such network for example SAN…

discloses the claimed computer program product and apparatus for reconciling billing measures to cost factors the…

teaches as follows A virtual machine can be migrated from one host computer to another utilizing LUN logic unit…
XXXXXXXXXXXXX
17

US20110302321A1

(Mark Vange, 2011)
(Original Assignee) Circadence Corp     

(Current Assignee)
Sons Of Innovation LLC
Data redirection system and method therefor cloud resources, cloud computing environment network resource, remote server

Internet resources network services

first resource first request

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses a system whereby a gateway provides clients access to the…

discloses dispatching requests including a quality of service context…

teaches a distributed data processing system for executing database operations comprising a a first group including…

teaches wherein evaluating the operation using the alternate evaluator is faster than evaluating the operation using a…
XXXXXXXXXXXXXX
18

CN102156665A

(余日泰, 2011)
(Original Assignee) Hangzhou Dianzi University (杭州电子科技大学)     A differentiated service method for contended resources in a virtualized system (一种虚拟化系统竞争资源差异化服务方法)

CPU consumption rate → 高速缓存 (cache)
processor usage, memory usage → 使用情况 (usage status), 的使用 (use of)

XXXXXXXXX
19

CN102082692A

(叶川, 2011)
(Original Assignee) Huawei Technologies Co., Ltd. (华为技术有限公司)     Virtual machine migration method, device, and cluster system based on network data flow direction (基于网络数据流向的虚拟机迁移方法、设备和集群系统)

Internet resources → 网络数据 (network data)
change region size → 大小确 (size determination, fragment)

XXXXXX
20

CN102457504A

(巫妍, 2012)
(Original Assignee) ZTE Corp     

(Current Assignee)
ZTE Corp
Application store system and method for developing applications using the application store system (应用商店系统及使用该应用商店系统进行应用开发的方法)

processor usage, memory usage → 一种使用 (a use), 的使用 (use of)
Internet resources → 一种资源 (a resource)

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses associating a room telephone with a hotel guest each time the guest stays at one or more hotels in a hotel…

discloses a SIP discrimination process capable of determining whether the SIP message is destined for a SIP user or a…

teaches the system method and computer readable medium of claims…

teaches categorizing the account according to one of a plurality of behavioral groups wherein the reference pattern…
XXXXXXXXXXXX
21

WO2011025720A1

(William A. Moyes, 2011)
(Original Assignee) Advanced Micro Devices, Inc.     Optimized thread scheduling via hardware performance monitoring memory usage, memory usage tracking said determination

cloud computing, cloud computing resource manager shared resources

first resource first thread

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches that when a plurality of hosts are presented that are capable of performing a task the plurality of hosts is…

teaches maintaining affinity between thread groups and processing instances…

discloses the operator information comprises at least status information of each operator…

teaches a computerreadable medium carrying one or more sequences of instructions which when executed by one or more…
XXXXXXXXXXXXX
22

US20110289555A1

(Greg DeKoenigsberg, 2011)
(Original Assignee) Red Hat Inc     

(Current Assignee)
Red Hat Inc
Mechanism for Utilization of Virtual Machines by a Community Cloud cloud resources, cloud computing environment virtual machines, cloud services

access rates end user

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches the claimed limitations identifying one or more facets of the plurality associated with the identified one or…

discloses the claimed computer program product and apparatus for reconciling billing measures to cost factors the…

discloses a switch element receiving necessary profile information which is used to configure the switch from an…

teaches methods for allowing multiple tenants to independently access and use a plurality of virtual computer…
XXXXXXXXXXXXXX
23

CN102238204A

(郑志昊, 2011)
(Original Assignee) Tencent Technology Shenzhen Co Ltd     

(Current Assignee)
Tencent Technology Shenzhen Co Ltd
Method and system for acquiring network data (网络数据的获取方法和系统)

processor usage tracking → 包括控制 (comprising control)
community cloud → 在网络 (in the network)

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses marking data packets that are configured to be transmitted via the second local wireless network communication…

discloses the prior art is replete with communication and entertainment systems that provide information in different…

teaches the portable gateway has multiple interfaces paragraph…

teaches wherein upon a pause of the VOD content an advertisement is displayed…
XXXXXX
24

CN102379107A

(K·F·多普勒, 2012)
(Original Assignee) Nokia Oyj     

(Current Assignee)
Nokia Technologies Oy
Method, apparatus, and computer program product for providing an indication of device-to-device communication availability (用于提供对设备到设备的通信可用性的指示的方法、装置和计算机程序产品)

processor usage tracking → 计算机程序产品 (computer program product)
cloud computing, cloud computing resource manager → 一种计算 (a computing, fragment)

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches a communication interface between a peripheral comprising the intelligent network interface card INIC…

teaches the invention as claimed including the method of claim…

teaches the switch further comprising the ports interconnection between the hosts and the peripheral device operating…

discloses the following limitations A method for implementing enterprise applicationbased micro blogging as in claim…
XXXXXXXXXXXX
25

CN102395938A

(卡尔·爱德华·马丁·萨科, 2012)
(Original Assignee) American Power Conversion Corp     

(Current Assignee)
Schneider Electric IT Corp
Power supply and data center control (电源和数据中心控制)

memory usage → 少一个数据 ([at] least one data, fragment)
cloud computing, cloud computing resource manager → 一种计算 (a computing, fragment)

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches further comprising at least one of an operating system operating system…

teaches wherein the operating parameters and the limiting parameters includes network bandwidth processor consumption…

teaches a management entity that tracks utilization of the server resources based on the workload and controls the…

teaches accelerating the CPU s frequency to higher frequency in response to detecting a command requiring…
XXXXXXXXXXXX
26

CN102163072A

(J·J·宋, 2011)
(Original Assignee) Intel Corporation (英特尔公司)     Software-based thread remapping for energy saving (用于节能的基于软件的线程重映射)

processor usage tracking → 计算机程序产品 (computer program product)
cloud computing, cloud computing resource manager → 一种计算 (a computing, fragment)
hybrid cloud → 上下文切 (context switch, fragment)

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
teaches a management entity that tracks utilization of the server resources based on the workload and controls the…

teaches a threshold taking into account worst case conditions for voltage and leakage power for devices…

teaches accelerating the CPU s frequency to higher frequency in response to detecting a command requiring…

discloses a method of processing waveform data from a device under test DUT comprising the steps of providing a test and…
XXXXXXXXXXXXXXX
27

US20110055838A1

(William A. Moyes, 2011)
(Original Assignee) Advanced Micro Devices Inc     

(Current Assignee)
Advanced Micro Devices Inc
Optimized thread scheduling via hardware performance monitoring memory usage, memory usage tracking said determination

cloud computing, cloud computing resource manager shared resources

first resource first thread

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches that when a plurality of hosts are presented that are capable of performing a task the plurality of hosts is…

teaches maintaining affinity between thread groups and processing instances…

discloses the operator information comprises at least status information of each operator…

teaches a computerreadable medium carrying one or more sequences of instructions which when executed by one or more…
XXXXXXXXXXXXX
28

CA2720087A1

(Jin-Gen Wang, 2009)
(Original Assignee) Level 3 Communications LLC     

(Current Assignee)
Level 3 Communications LLC
Content delivery in a network first resource first request

maximum capacity one range

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches that the personal identifier of a verified or authorized user includes an identifier of a domain that includes…

teaches a method of generating a playlist based on an input seed such as a song name or artist name see abstract and…

teaches the query can be selected using a leastrecently used algorithm see…

discloses an invention for delivering targeted advertising to mobile devices and teaches that used and expired…
XXXX
29

CN101541048A

(司源, 2009)
(Original Assignee) Huawei Technologies Co., Ltd. (华为技术有限公司)     Quality-of-service control method and network device (服务质量控制方法和网络设备)

consumption rate, I/O access rate → 比特速率 (bit rate)
memory usage, memory usage tracking → 单元判 (unit determination, fragment)

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses all the limitations of the claim but fails to teach the conversion of the classification information into…

teaches that the synchronization destination utilizes synchronization commands from said mobile communication device…

teaches a wireless communication apparatus capable of using a plurality of different wireless communication systems…

discloses a method for handing off users equipment between different access systems wherein one of the first and the…
XXXXXX
30

CN101978677A

(A·多尔加诺, 2011)
(Original Assignee) Alcatel-Lucent (阿尔卡特朗讯公司)     Enhancement of in-band application awareness propagation (带内应用认知传播的增强)

memory usage → 少一个数据 ([at] least one data, fragment)
therein instructions → 指令进行 (instructions to perform, fragment)
community cloud → 在网络 (in the network)

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
discloses that the scheduling information includes only the amount of interference on the reverse link…

discloses wherein the transmission parameter is a maximum emission power value see section…

discloses a system that allows mobile communication subscribers to send and receive multimedia messages as shown in…

teaches wherein determining that bandwidth associated with the stream is available comprises determining if tokens are…
XXXXXXXXX
31

WO2010089626A1

(Ake Arvidsson, 2010)
(Original Assignee) Telefonaktiebolaget L M Ericsson (Publ)     Hybrid program balancing I/O access rate processing elements

cloud resources, cloud computing environment virtual machines

processor usage load data

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches using pre determined criteria to verify a matching target computing system…

teaches wherein the step of selectively controlling the task to migrate to the second cluster comprises referring to…

discloses the first system and the second sytem are configured such that while of the first system or the second system…

teaches the invention substantially as claimed including a computerreadable nontransitory medium storing a management…
XXXXXXXXXXXXXXXX
32

CN101730150A

(周娜, 2010)
(Original Assignee) ZTE Corporation (中兴通讯股份有限公司)     Method for controlling network resources during service flow migration (业务流迁移时对网络资源进行控制的方法)

first resource management scheme, second resource management scheme → 资源释放 (resource release)
community cloud → 在网络 (in the network)

XXXXXXXX
33

US20090182868A1

(Marlin Popeye McFate, 2009)
(Original Assignee) Circadence Corp     

(Current Assignee)
Sons Of Innovation LLC
Automated network infrastructure test and diagnostic system and method therefor cloud resources, processor usage network performance

maximum capacity bandwidth usage

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses dispatching requests including a quality of service context…

discloses a system whereby a gateway provides clients access to the…

teaches a distributed data processing system for executing database operations comprising a a first group including…

discloses that the retrieved security association will be updated would reads on modifying the retrieved security…
XXXXXXXXXXXXXXXXXX
34

CN101404624A

(S·L·彼得森, 2009)
(Original Assignee) Concert Technology Corporation (音乐会技术公司)     System and method for prioritizing downloads of media items (对媒体项目的下载进行优先级排序的系统和方法)

consumption rate, CPU consumption rate → 数据速率 (data rate)
Internet resources → 网络数据 (network data)

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches the method further comprising requesting the sender to indicate a priority level of the first message ie…

discloses cascading said rst sub lter and at least one remainder sub lter to create at least part of said ensemble lter…

discloses that the system of assigning the priority level for each groups subject or schedule which relates to the…

teaches the method wherein operating in the automatic response mode further comprises…
XXXXXX
35

US20100162261A1

(Laksmikantha Hosahally Shashidhara, 2010)
(Original Assignee) PES INST OF Tech     

(Current Assignee)
PES INST OF Tech
Method and System for Load Balancing in a Distributed Computer System Internet resources available resource

hybrid cloud, cloud computing environment job request

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches using pre determined criteria to verify a matching target computing system…

teaches wherein the step of selectively controlling the task to migrate to the second cluster comprises referring to…

discloses the first system and the second sytem are configured such that while of the first system or the second system…

teaches the invention substantially as claimed including a computerreadable nontransitory medium storing a management…
XXXXXX
36

WO2008142705A2

(Hosahally Lakshmikantha Shashidhara, 2008)
(Original Assignee) Pes Institute Of Technology     A method and system for load balancing in a distributed computer system Internet resources available resource

hybrid cloud, cloud computing environment job request

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches using pre determined criteria to verify a matching target computing system…

teaches wherein the step of selectively controlling the task to migrate to the second cluster comprises referring to…

discloses the first system and the second sytem are configured such that while of the first system or the second system…

teaches the invention substantially as claimed including a computerreadable nontransitory medium storing a management…
XXXXXX
37

CN101663647A

(N·纳加拉杰, 2010)
(Original Assignee) Qualcomm Incorporated (高通股份有限公司)     Apparatus for deciding whether to launch an application locally or to launch it remotely as a webapp (决定是在本地启动应用还是远程启动应用作为webapp的装置)

maximum capacity → 处理能力 (processing capability)
processor usage, memory usage → 的使用 (use of)

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches an action list specifying a set of selectable actions select object which represents underlying action such as…

teaches associating said web service output operation with a set of output actions verify accuracy of response to web…

discloses wherein the atomic instruction is one of an interlocked increment instruction and an interlocked decrement…

teaches the invention substantially as claimed including a distributed system providing scalable methodology for real…
XXXXXXXXX
38

US20090235265A1

(Christopher J. DAWSON, 2009)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
ServiceNow Inc
Method and system for cost avoidance in virtualized computing environments cloud resources, cloud computing environment network resource

processor usage memory resource

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches a transaction counter corresponding to an actual account…

teaches the claimed invention wherein the functions are substantially the same as the apparatus of claim…

discloses A browser for requesting and viewing external web applications pages on the local user interface of the MFD…

discloses wherein the predefined criteria includes a determination that the second device does not have a security…
XXXXXXXXXXXXXXXX
39

CN101889264A

(天宇·里·达莫雷, 2010)
(Original Assignee) Qualcomm Incorporated (高通股份有限公司)     Apparatus and method for configurable system event and resource arbitration management (可配置系统事件和资源仲裁管理的设备和方法)

processor usage tracking → 计算机程序产品 (computer program product)
cloud computing, cloud computing resource manager → 一种计算 (a computing, fragment)

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses wherein providing the configuration data to the first mobile application includes the second mobile…

teaches the storage user request rental of the storage comprising various parameters…

teaches a data processing system and method for resource computer memory management in distributed systems comprising…

teaches the limitations above he fails to explicitly teach determining if the expandable function is not executable…
XXXXXXXXXXXX
40

CN101126992A

(苏工, 2008)
(Original Assignee) International Business Machines Corporation (国际商业机器公司)     Method and system for allocating a plurality of tasks among a plurality of nodes in a network (在网络中的多个节点中分配多个任务的方法和系统)

processor usage, memory usage → 的使用 (use of)
community cloud → 在网络 (in the network)

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches an action list specifying a set of selectable actions select object which represents underlying action such as…

teaches the invention substantially as claimed including a distributed system providing scalable methodology for real…

teaches associating said web service output operation with a set of output actions verify accuracy of response to web…

teaches the use of converting the usage of the hardware component into the power consumption comprises identifying a…
XXXXXXXXXXXX
41

CN101098306A

(罗伯特·E.·斯托姆, 2008)
(Original Assignee) International Business Machines Corporation (国际商业机器公司)     Method and system for controlling when to send messages in a stream processing system (在流处理系统中控制何时发送消息的方法和系统)

cloud computing, cloud computing resource manager → 一种计算 (a computing, fragment)
processor usage tracking → 包括控制 (comprising control)
access rates → 可用的 (available)
community cloud → 在网络 (in the network)

35 U.S.C. 103(a)

35 U.S.C. 102(b)
discloses marking data packets that are configured to be transmitted via the second local wireless network communication…

teaches notifying a sender of an alert of changes in a status of said alert…

teaches the portable gateway has multiple interfaces paragraph…

teaches the method wherein the at least one predetermined criterion comprises at least one of an identity of the…
XXXXXXXXXXXXXXX
42

CN101410803A

(D·N·罗宾森, 2009)
(Original Assignee) Citrix Systems, Inc. (思杰系统有限公司)     Methods and systems for providing access to a computing environment (用于提供对计算环境的访问的方法和系统)

memory usage → 少一个数据 ([at] least one data, fragment)
access rates, I/O access rates → 用于访问 (for accessing)
processor usage tracking → 代理相关 (agent-related)
cloud computing resource manager → 一个表 (a table)
community cloud → 在网络 (in the network)

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
teaches a transaction counter corresponding to an actual account…

teaches each virtual machine in the configuration embedding VNC…

discloses A browser for requesting and viewing external web applications pages on the local user interface of the MFD…

teaches the claimed invention wherein the functions are substantially the same as the apparatus of claim…
XXXXXXXXXXXXXXXX
43

US20080163239A1

(Suresh Sugumar, 2008)
(Original Assignee) Intel Corp     

(Current Assignee)
Intel Corp
Method for dynamic load balancing on partitioned systems memory usage, memory usage tracking more processor cores

therein instructions second instruction, first instruction

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches using pre determined criteria to verify a matching target computing system…

teaches a method for migrating a partition or VM by a hypervisor for the purposes of load balancing…

teaches as follows A virtual machine can be migrated from one host computer to another utilizing LUN logic unit…

discloses A system for communicating to a storage controller in a virtualization environment comprising a plurality of…
XXXXXX
44

US20070079308A1

(Michael Chiaramonte, 2007)
(Original Assignee) Computer Associates Think Inc     

(Current Assignee)
Computer Associates Think Inc
Managing virtual machines processor usage, processor usage tracking more processors

I/O access rates more hardware

access rates lower limit

35 U.S.C. 103(a)

35 U.S.C. 102(b)
discloses a computer program embodied on a computer readable medium see abstract lines…

teaches a method for use with a computing system comprising a first computer system controller…

teaches of assigning different priority equivalent to applicant s different priority class defined for each different…

discloses a method of optimizing virtual graphics processing unit utilization the method comprising assigning a…
XXXXXX
45

US20060155912A1

(Sumankumar Singh, 2006)
(Original Assignee) Dell Products LP     

(Current Assignee)
Dell Products LP
Server cluster having a virtual server cloud resources, cloud computing environment virtual machines

memory usage first operating

35 U.S.C. 103(a)

35 U.S.C. 102(b)
discloses that the weighted scheme being based on needs of a user requesting the…

teaches wherein said step of updating is accomplished by a virtualizer…

teaches assigning memory from other partitions if a shared memory logical partition could not be put in the at least…

teaches a method of managing spillover via a plurality of cores of a multicore device intermediary to a plurality of…
XXXXXXXXXXXXXXXX
46

US20020002636A1

(Mark Vange, 2002)
(Original Assignee) Circadence Corp     

(Current Assignee)
Sons Of Innovation LLC
System and method for implementing application functionality within a network infrastructure Internet resources network services

processor usage selecting data

access rates selected set

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses a system whereby a gateway provides clients access to the…

discloses dispatching requests including a quality of service context…

teaches a distributed data processing system for executing database operations comprising a a first group including…

teaches wherein evaluating the operation using the alternate evaluator is faster than evaluating the operation using a…
XXXXXXXXX
47

US20020002611A1

(Mark Vange, 2002)
(Original Assignee) Circadence Corp     

(Current Assignee)
Sons Of Innovation LLC
System and method for shifting functionality between multiple web servers cloud resources, cloud computing environment network resource

Internet resources network services

access rates selected set

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses a system whereby a gateway provides clients access to the…

discloses dispatching requests including a quality of service context…

teaches a distributed data processing system for executing database operations comprising a a first group including…

teaches wherein evaluating the operation using the alternate evaluator is faster than evaluating the operation using a…
XXXXXXXXXXXXXX
48

US20020002622A1

(Mark Vange, 2002)
(Original Assignee) Circadence Corp     

(Current Assignee)
Sons Of Innovation LLC
Method and system for redirection to arbitrary front-ends in a communication system cloud computing environment network resources

alternate cloud, alternate cloud resources first channel

first resource first request

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses a system whereby a gateway provides clients access to the…

discloses dispatching requests including a quality of service context…

teaches a distributed data processing system for executing database operations comprising a a first group including…

teaches wherein evaluating the operation using the alternate evaluator is faster than evaluating the operation using a…
XXXXXXX
49

US20020004816A1

(Mark Vange, 2002)
(Original Assignee) Circadence Corp     

(Current Assignee)
Sons Of Innovation LLC
System and method for on-network storage services cloud computing data storage device

hybrid cloud storage location

replacement scheme attached storage

cloud resources storage system

memory usage storage area

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses a system whereby a gateway provides clients access to the…

discloses dispatching requests including a quality of service context…

teaches a distributed data processing system for executing database operations comprising a a first group including…

teaches wherein evaluating the operation using the alternate evaluator is faster than evaluating the operation using a…
XXXXXXXXXXXXXXXXXX
50

US20020002602A1

(Mark Vange, 2002)
(Original Assignee) Circadence Corp     

(Current Assignee)
Sons Of Innovation LLC
System and method for serving a web site from multiple servers alternate cloud, alternate cloud resources first channel

hybrid cloud other network

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses a system whereby a gateway provides clients access to the…

discloses dispatching requests including a quality of service context…

teaches a distributed data processing system for executing database operations comprising a a first group including…

teaches wherein evaluating the operation using the alternate evaluator is faster than evaluating the operation using a…
XXXXXX
51

US5655120A

(Martin Witte, 1997)
(Original Assignee) Siemens AG     

(Current Assignee)
Nokia Solutions and Networks GmbH and Co KG
Method for load balancing in a multi-processor system where arising jobs are processed by a plurality of processors under real-time conditions memory usage multi-processor system

I/O access rate specific value

second resource management scheme said time

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
teaches operating said autonomous software agent to receive a fault response from said web service problem associate…

teaches the invention substantially as claimed including a distributed system providing scalable methodology for real…

teaches associating said web service output operation with a set of output actions verify accuracy of response to web…

discloses wherein the atomic instruction is one of an interlocked increment instruction and an interlocked decrement…
XXXXXXX
52

US20120096460A1

(Atsuji Sekiguchi, 2012)
(Original Assignee) Fujitsu Ltd     

(Current Assignee)
Fujitsu Ltd
Apparatus and method for controlling live-migrations of a plurality of virtual machines second resource second resource

first resource first resource

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches the invention substantially as claimed including a method for migrating a service…

teaches virtual switches and master slave relationship and configuration of virtual network however…

teaches wherein the migration request further comprises a VM content file indicating shareable resources and non…

discloses mapping memory area from one virtual machine to another…
XXXXX
53

WO2011031459A2

(Ulas C. Kozat, 2011)
(Original Assignee) Ntt Docomo, Inc.     A method and apparatus for data center automation maximum capacity processing speed

alternate cloud resources, community cloud physical server

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
teaches a computer program product for providing content management services in a…

discloses wherein said selecting said service location manager comprises…

discloses the user GUI for configuring inter alia virtual machines and operating systems…

teaches using a remote display protocol to access a client user interface web browser…
XXXXXX
54

US20100235845A1

(John P. Bates, 2010)
(Original Assignee) Sony Interactive Entertainment Inc     

(Current Assignee)
Sony Interactive Entertainment Inc
Sub-task processor distribution scheduling Internet resources more process

memory consumption rate output data

maximum capacity total size

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses the distributed processing system according to claim…

teaches software implementation of the above limitations for the purpose of providing a portable and flexible method…

teaches that the thread pool set comprises one or more threads configured to perform real time ray tracing…

teaches analyzing the read data portion to determine the data pattern includes identifying the data type based on the…
XXXXXX
55

US20110078691A1

(Huseyin S. Yildiz, 2011)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Technology Licensing LLC
Structured task hierarchy for a parallel runtime processor usage, processor usage tracking more processors, executing code

cloud computing current task

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses wherein prioritizing the code in order of variation of inactivity over periods of sampling sequences comprises…

discloses using the operating system to manage a translation lookaside buffer page…

teaches a method for performing preemptive scheduling of a plurality of processes managed by a symmetric…

teaches in the event that the first thread has not reached a thread safe point and a second indication col…
XXXXXXXXXXXX
56

WO2010039023A2

(Khong Neng Choong, 2010)
(Original Assignee) Mimos Berhad     Method to assign traffic priority or bandwidth for application at the end users-device cloud resources loading one

maximum capacity file size

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
discloses the means for comparing priority values assigned to the messages by the first means to the one or more…

discloses presenting respective indications of the detected information source devices to resource allocation module…

teaches includes a predefined maximum aggregate bit rate at which all services can be accessed…

discloses wherein the communicatively coupled remote device is a bandwidth broker or other generic policy server see…
XXXXXXXXXXXXXX
57

US20090204718A1

(Kevin P. Lawton, 2009)
(Original Assignee) Lawton Kevin P; Stevan Vlaovic     Using memory equivalency across compute clouds for accelerated virtual memory migration and memory de-duplication hybrid cloud storage location

processor usage internal memory

memory usage first operating

processor usage tracking processing unit

first resource second number

maximum capacity carrying one

Internet resources more process

consumption rate data values

access rates fewer bits

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches software is used to configure and supervise various partitions…

teaches a hypervisor that determines a target memory size for each virtual machine…

teaches the limitations substantially as claimed including the computer program of claim…

discloses identifying the present status of a computing resource pool wherein the resource pool comprises a plurality of…
XXXXXXXXXXXXX
58

WO2009088749A2

(Lior Assouline, 2009)
(Original Assignee) Harmonic, Inc.     Methods and system for efficient data transfer over hybrid fiber coax infrastructure second resource selecting content

maximum capacity coding parameter

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
discloses wherein the timing information indicates that the request should be served earlier or later than an identified…

teaches the step b of receiving digital image data for said alternate scene sequence comprises the step of receiving…

teaches all of the claim limitations of the system as claimed in claims…

discloses IP packets addressed to process in the settop decoder of the TV and conditional access such that only paid…
XXXX
59

EP2065804A1

(Keitaro c/o Hitachi Ltd Intellectual Property Group Uehara, 2009)
(Original Assignee) Hitachi Ltd     

(Current Assignee)
Hitachi Ltd
Virtual machine monitor and multi-processor system processor usage, processor usage tracking more processors

replacement scheme receiving port

XXXXXXXXXXXXX
60

US20090222654A1

(Herbert Hum, 2009)
(Original Assignee) Intel Corp     

(Current Assignee)
Intel Corp
Distribution of tasks among asymmetric processing elements I/O access rate processing elements

consumption rate, memory consumption rate low power state

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
teaches the invention substantially as claimed including a method for controlling a task migration of a task in a…

teaches tracking the N of inactive processor cores unused cores…

teaches a system with heterogeneous cores and logic that keeps track of performance performance monitor information of…

discloses a method comprising receiving a performance state update from an operating system…
XXX
61

WO2008127622A2

(Aaftab Munshi, 2008)
(Original Assignee) Apple Inc.     Data parallel computing on multiple processors processor usage, I/O access rate central processing unit, processor usage

memory usage memory usage

XXXXXXXXX
62

WO2008127604A2

(Aaftab Munshi, 2008)
(Original Assignee) Apple Inc.     Shared stream memory on multiple processors processor usage, I/O access rate central processing unit

first resource allocating memory

Internet resources more process

35 U.S.C. 103(a)

discloses in the absence of policies or constraints to the contrary virtually any resource may be assigned to any domain…

discloses a service provider management device according to claim…
XXXXXXXXXX
63

EP2135163A2

(Aaftab Munshi, 2009)
(Original Assignee) Apple Inc     

(Current Assignee)
Apple Inc
Data parallel computing on multiple processors processor usage, I/O access rate central processing unit, processor usage

memory usage memory usage

XXXXXXXXX
64

EP2012233A2

(Russell J. Fenger, 2009)
(Original Assignee) Intel Corp     

(Current Assignee)
Intel Corp
System and method to optimize OS scheduling decisions for power savings based on temporal charcteristics of the scheduled entity and system workload cloud computing, cloud computing resource manager shared resources

therein instructions other threads

XXXXXXXXXXX
65

WO2008093066A2

(Bob Tang, 2008)
(Original Assignee) Bob Tang     Immediate ready implementation of virtually congestion free guaranteed service capable network : nextgentcp/ftp/udp intermediate buffer cyclical sack re-use memory usage, memory consumption rate bandwidth requirements

I/O access rates destination nodes

hybrid cloud other network

access rates end user

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
discloses a method for calculating a slowstart for TCP protocol for flow control of packets…

teaches a credit system that control the transmission rate and…

teaches a method of providing for congestion detection and of providing for relief of signaled congestion conditions…

discloses wherein said step of sending said flow control adjustment command comprises setting said flow control…
XXXXXXXXX
66

US20090019449A1

(Gyu-sang Choi, 2009)
(Original Assignee) Samsung Electronics Co Ltd     

(Current Assignee)
Samsung Electronics Co Ltd
Load balancing method and apparatus in symmetric multi-processor system memory usage multi-processor system

first resource, first resource management scheme selected two

XXXXXXX
67

US20080022073A1

(Millind Mittal, 2008)
(Original Assignee) Alacritech Inc     

(Current Assignee)
RPX Corp
Functional-level instruction-set computer architecture for processing application-layer content-service requests such as file-access requests processor usage, I/O access rate central processing unit

memory usage storage area

Internet resources more process

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
discloses a storage manager receiving a request that comprises a plurality of parameters relating to a quality of…

discloses the invention substantially as described in claims…

discloses an application message that includes quality of service information…

teaches the TCPIP processor and engine using RDMA implementing a RDMA engine that provides various capabilities…
XXXXXXXXXXXX
68

US20080104587A1

(Daniel J. Magenheimer, 2008)
(Original Assignee) Hewlett Packard Development Co LP     

(Current Assignee)
Hewlett Packard Enterprise Development LP
Migrating a virtual machine from a first physical machine in response to receiving a command to lower a power mode of the first physical machine cloud resources, cloud computing environment virtual machines

maximum capacity lower power

processor usage tracking, memory usage tracking power mode

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches wherein the first and second system are arranged within a same housing column…

teaches further comprising determining a fee for the migration based upon at least one of the following a quantity of…

teaches a management entity that tracks utilization of the server resources based on the workload and controls the…

teaches a method for current control in a mobile terminal the method comprising the steps of measuring a total…
XXXXXXXXXXXXXXXXXX
69

US20080022282A1

(Ludmila Cherkasova, 2008)
(Original Assignee) Hewlett Packard Development Co LP     

(Current Assignee)
Hewlett Packard Enterprise Development LP
System and method for evaluating performance of a workload manager processor usage, I/O access rate central processing unit

memory usage load demand

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches monitoring memory usage CPU status and utilization see…

discloses calculating an event percentage of each function over events of all functions in the list of functions and…

teaches wherein the application programming interface is implemented on a computing platform that provides one or more…

teaches the provisioning and loading of the servers before allocating the server to a service see page…
XXXXXXXXX
70

US20070204265A1

(Jacob Oshins, 2007)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Technology Licensing LLC
Migrating a virtual machine that owns a resource such as a hardware device cloud resources, cloud computing environment virtual machines

cloud computing second platform, first platform

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches the invention substantially as claimed including a method for migrating a service…

teaches virtual switches and master slave relationship and configuration of virtual network however…

teaches wherein the migration request further comprises a VM content file indicating shareable resources and non…

discloses mapping memory area from one virtual machine to another…
XXXXXXXXXXXXXXXXXX
71

WO2007087828A1

(Per Willars, 2007)
(Original Assignee) Telefonaktiebogalet Lm Ericsson (Publ)     Method and devices for installing packet filters in a data transmission public cloud associated item

processor usage tracking processing unit

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(a)
teaches bearer selection is based on the operator policy rules see paragraphs…

discloses that a round robin scheduling method may be deployed…

teaches the method wherein the step of capturing raw traffic traces over standardized interfaces of the operational…

discloses wherein said at least one communications node comprises at least one of gateway general packet radio service…
XXXXXX
72

US20050120160A1

(Jerry Plouffe, 2005)
(Original Assignee) Virtual Iron Software Inc; Katana Technology Inc     

(Current Assignee)
Oracle International Corp ; Virtual Iron Software Inc
System and method for managing virtual servers alternate cloud processing resource

consumption rate, memory consumption rate virtual server

maximum capacity d line

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
teaches the claimed limitations wherein providing a tenant with user access to the generated data collection…

discloses a communication system for selective packet mirroring with the following features regarding claim…

discloses the user GUI for configuring inter alia virtual machines and operating systems…

discloses wherein at least the first and seconds each comprise an operating system wherein the operating system of the…
XXXXXX
73

US20060069761A1

(Sumankumar Singh, 2006)
(Original Assignee) Dell Products LP     

(Current Assignee)
Dell Products LP
System and method for load balancing virtual machines in a computer network hybrid cloud resource availability

cloud resources, cloud computing environment virtual machines

processor usage memory resource

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses to monitor a status for each of the plurality of services…

teaches the content distribution system according to claim…

teaches a method wherein selecting a server to service the new HTTP request gives preference to the server to which…

discloses wherein the step of assigning the response target to the…
XXXXXXXXXXXXXXXX
74

US7337233B2

(Douglas M. Dillon, 2008)
(Original Assignee) Hughes Network Systems LLC     

(Current Assignee)
JPMorgan Chase Bank NA ; Hughes Network Systems LLC
Network system with TCP/IP protocol spoofing public cloud to receive data

processor usage tracking, alternate cloud resources include one keep track

XXXXXX
75

US20050060590A1

(David Bradley, 2005)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
Power-aware workload balancing usig virtual machines first resource physical resources

cloud resources, cloud computing environment virtual machines

hybrid cloud storage location

maximum capacity supporting one

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
teaches a management entity that tracks utilization of the server resources based on the workload and controls the…

teaches a threshold taking into account worst case conditions for voltage and leakage power for devices…

teaches the power supplies of the physical hosts are controlled when assigning the virtual machines to enable power…

discloses a method of processing waveform data from a device under test DUT comprising the steps of providing a test and…
XXXXXXXXXXXXXX
76

US7389330B2

(Douglas Dillon, 2008)
(Original Assignee) Hughes Network Systems LLC     

(Current Assignee)
DIRECTIV GROUP Inc ; JPMorgan Chase Bank NA ; Hughes Network Systems LLC
System and method for pre-fetching content in a proxy architecture I/O access rates configurable threshold

processor usage, processor usage tracking more processors

XXXXXX
77

US7336967B2

(Frank Kelly, 2008)
(Original Assignee) Hughes Network Systems LLC     

(Current Assignee)
JPMorgan Chase Bank NA ; Hughes Network Systems LLC
Method and system for providing load-sensitive bandwidth allocation processor usage, processor usage tracking more processors

public cloud to receive data

XXXXXXXXX
78

US20040062246A1

(Laurence Boucher, 2004)
(Original Assignee) Alacritech Inc     

(Current Assignee)
Alacritech Inc
High performance network interface processor usage tracking multiple processors

memory usage storage area

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches the switch further comprising the ports interconnection between the hosts and the peripheral device operating…

discloses all the subject matter of the claimed invention as recited in claim…

discloses a method in a data processing system for transferring data between a host system and a network adapter…

discloses a memory system for a high performance IP processor comprising a specific application buffer allocated to a…
XXXXXXXXX
79

US20040010612A1

(Ashish Pandya, 2004)
(Original Assignee) Pandya Ashish A.     High performance IP processor using RDMA CPU consumption rate programmable packet

replacement scheme attached storage

processor usage internal memory

hybrid cloud, public cloud other network, layer remote

memory usage storage area

I/O access rate, I/O access rates output ports, input ports

first resource management scheme chip memory

memory consumption rate output data

maximum capacity d line

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches the TCPIP processor and engine using RDMA implementing a RDMA engine that provides various capabilities…

discloses an of oad method comprising communicating data over a network utilizing a plurality of protocols associated…

discloses the present invention is as noted above in way limited to CAM arrays of any particular size width or depth…

discloses the sub system wherein the system includes a display coupled to the bus see…
XXXXXXXXXXXXXXXXXX
80

US7191318B2

(Tarun Kumar Tripathy, 2007)
(Original Assignee) Alacritech Inc     

(Current Assignee)
RPX Corp
Native copy instruction for file-access processor with copy-rule-based validation processor usage memory resource

alternate cloud resources current value

XXXXXXXXX
81

EP1876758A2

(Qian Zhang, 2008)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Corp
Peer-to-Peer method of quality of service (QoS) probing and analysis and infrastructure employing same change region size high quality

I/O access rate game server

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses the principles of the present invention for collecting and storing network performance information of…

teaches and the default network connection is the peertopeer network connection…

teaches sending packet probes over multiple different types of packetbased network structures…

discloses the claimed invention except may not have specifically mentioned timestamp speed eg update interval is one…
XXX
82

US6879526B2

(William Thomas Lynch, 2005)
(Original Assignee) Ring Tech Enterprises LLC     

(Current Assignee)
United Microelectronics Corp
Methods and apparatus for improved memory access first resource management scheme non-volatile memory, more inputs

access rates, I/O access rates transfer data

alternate cloud one output

processor usage load data, more set

XXXXXXXXXX
83

US20040064590A1

(Daryl Starr, 2004)
(Original Assignee) Alacritech Inc     

(Current Assignee)
Alacritech Inc
Intelligent network storage interface system hybrid cloud storage location

access rates second control

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches the switch further comprising the ports interconnection between the hosts and the peripheral device operating…

teaches a cryptographic device comprising a cryptographic module see paragraph…

discloses using last come first serve logic with a MAC layer…

discloses the processor or the PCI bridge determines which of the different types of network traffic accesses a…
XXXXXX
84

US20040073703A1

(Laurence Boucher, 2004)
(Original Assignee) Alacritech Inc     

(Current Assignee)
Alacritech Inc
Fast-path apparatus for receiving data corresponding a TCP connection processor usage, I/O access rate central processing unit, hardware logic

therein instructions second instruction, first instruction

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches a cryptographic device comprising a cryptographic module see paragraph…

discloses all the subject matter of the claimed invention with the exception of the…

discloses using last come first serve logic with a MAC layer…

discloses the entire packet header and payload are passed to a buffer on host…
XXXXXX
85

US7237036B2

(Laurence B. Boucher, 2007)
(Original Assignee) Alacritech Inc     

(Current Assignee)
Alacritech Inc
Fast-path apparatus for receiving data corresponding a TCP connection therein instructions second instruction, first instruction

I/O access rate, access rates hardware logic, transfer data

XXX




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
COMPUTER. 28 (4): 14-22 APR 1995

Publication Year: 1995

DISTRIBUTED COORDINATION MODELS FOR CLIENT-SERVER COMPUTING

Coordinated Computing Technologies, Medford, MA, USA

Adler
US9635134B2
CLAIM 1. A method to manage resources in a cloud computing environment, comprising: determining a consumption rate of cloud resources by one or more virtual machines (VMs), the determining based on monitoring at least one of processor usage, memory usage, or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based, at least in part, on the determined consumption rate;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold, the change in the consumption rate including a change in the at least one of processor usage, memory usage, I/O access rates, or a change region size based on changed regions of a graphical display generated by the one or more VMs;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based, at least in part, on a maximum capacity (individual client) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold;

and migrating the consumption of the cloud resources to alternate cloud (multiple service) resources located outside of the cloud computing environment for at least one of the one or more VMs based, at least in part, on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme.
DISTRIBUTED COORDINATION MODELS FOR CLIENT-SERVER COMPUTING . Interactions between distributed applications presuppose an underlying control model to coordinate information exchanges and networking software to implement that model . The client/server control model defines distributed interactions in terms of one program requesting and obtaining a service from a second , possibly remote , application . However , this basic model provides inadequate design support when clients need to invoke multiple , independent services , coordinated to reflect how those services interrelate and contribute to the overall application . The author describes extensions to the basic client/server model that explicitly address one-to-many client/server interactions by discussing three basic design issues : how multiple service (alternate cloud, alternate cloud resources, community cloud) s are requested , how those services are managed , and how clients receive responses . The extended coordination models provide support for locating , obtaining , and synchronizing services , as well as for collecting and combining results from multiple servers in a manner that is transparent to clients . Extended models include a scripting engine for managing data and temporal dependencies among services ;
a basic request broker for mediating client access to distributed services ;
and extended request broker models that decompose composite services , manage redundant servers , and replicate messages to logical server groups . These coordination models were designed as generic , programmable control services . The control services are interoperable , so they can be combined like building blocks to match application-specific coordination requirements . The one-to-many coordination services are layered on top of an object-oriented , message-passing communication substrate , which transparently manages the complexities of interprogram interactions across networks of heterogeneous computers . This layered architecture lets complex coordination behaviors be modeled and executed external to application elements . It accomplishes this through high-level application programming interface (API) calls . The resulting partitioning of application and generic distributed behaviors yields improved modularity , maintainability , and extensibility of individual client (maximum capacity) s and servers .
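Read as an algorithm, the claim 1 flow charted above (monitor per-VM processor, memory, and I/O metrics; prioritize under a first scheme; test whether the change in consumption rate, including change region size, exceeds a predetermined threshold; re-prioritize under a second scheme bounded by a maximum capacity; migrate the overflow to alternate cloud resources) can be pictured with a minimal sketch. The Python below is an illustrative assumption only: every identifier, weighting, and threshold or capacity value is hypothetical and is not taken from the patent, its specification, or the Adler reference.

```python
# Hypothetical illustration of the claim 1 flow; names, weights, and
# threshold/capacity values are assumptions, not taken from the patent.
from dataclasses import dataclass
from typing import List

@dataclass
class VMSample:
    name: str
    cpu: float            # processor usage (0..1)
    mem: float            # memory usage (0..1)
    io_rate: float        # I/O access rate (normalized 0..1)
    change_region: float  # changed graphical-display region size (normalized 0..1)

def consumption_rate(s: VMSample) -> float:
    # Fold the monitored metrics into one consumption figure (illustrative average).
    return (s.cpu + s.mem + s.io_rate) / 3.0

def first_scheme(samples: List[VMSample]) -> List[VMSample]:
    # First resource management scheme: rank VMs by determined consumption rate.
    return sorted(samples, key=consumption_rate, reverse=True)

def change_exceeds_threshold(prev: VMSample, cur: VMSample, threshold: float) -> bool:
    # Change in processor usage, memory usage, I/O access rate, or change region size.
    deltas = (abs(cur.cpu - prev.cpu), abs(cur.mem - prev.mem),
              abs(cur.io_rate - prev.io_rate), abs(cur.change_region - prev.change_region))
    return max(deltas) > threshold

def second_scheme(samples: List[VMSample], max_capacity: float) -> List[VMSample]:
    # Second scheme: admit VMs up to the maximum allowed capacity; the remainder
    # are the VMs whose consumption is migrated to alternate cloud resources.
    kept_total, migrate = 0.0, []
    for vm in first_scheme(samples):
        rate = consumption_rate(vm)
        if kept_total + rate <= max_capacity:
            kept_total += rate
        else:
            migrate.append(vm)
    return migrate

if __name__ == "__main__":
    prev = [VMSample("vm1", 0.2, 0.3, 0.1, 0.0), VMSample("vm2", 0.4, 0.5, 0.2, 0.1)]
    cur  = [VMSample("vm1", 0.8, 0.7, 0.6, 0.5), VMSample("vm2", 0.4, 0.5, 0.2, 0.1)]
    if any(change_exceeds_threshold(p, c, threshold=0.3) for p, c in zip(prev, cur)):
        print("migrate:", [vm.name for vm in second_scheme(cur, max_capacity=0.6)])
    else:
        print("prioritize:", [vm.name for vm in first_scheme(cur)])
```

The sketch only makes the two-scheme structure explicit for comparison against the mapped reference passages; it is not an interpretation of claim scope.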

US9635134B2
CLAIM 5. The method of claim 1, wherein the alternate cloud (multiple service) resources include one or more of resources included in public cloud, resources included in community cloud (multiple service), resources included in private cloud, resources included in hybrid cloud, Internet resources, or resources included in virtual private networks (VPNs).
DISTRIBUTED COORDINATION MODELS FOR CLIENT-SERVER COMPUTING . Interactions between distributed applications presuppose an underlying control model to coordinate information exchanges and networking software to implement that model . The client/server control model defines distributed interactions in terms of one program requesting and obtaining a service from a second , possibly remote , application . However , this basic model provides inadequate design support when clients need to invoke multiple , independent services , coordinated to reflect how those services interrelate and contribute to the overall application . The author describes extensions to the basic client/server model that explicitly address one-to-many client/server interactions by discussing three basic design issues : how multiple service (alternate cloud, alternate cloud resources, community cloud) s are requested , how those services are managed , and how clients receive responses . The extended coordination models provide support for locating , obtaining , and synchronizing services , as well as for collecting and combining results from multiple servers in a manner that is transparent to clients . Extended models include a scripting engine for managing data and temporal dependencies among services ;
a basic request broker for mediating client access (alternate cloud resources include one) to distributed services ;
and extended request broker models that decompose composite services , manage redundant servers , and replicate messages to logical server groups . These coordination models were designed as generic , programmable control services . The control services are interoperable , so they can be combined like building blocks to match application-specific coordination requirements . The one-to-many coordination services are layered on top of an object-oriented , message-passing communication substrate , which transparently manages the complexities of interprogram interactions across networks of heterogeneous computers . This layered architecture lets complex coordination behaviors be modeled and executed external to application elements . It accomplishes this through high-level application programming interface (API) calls . The resulting partitioning of application and generic distributed behaviors yields improved modularity , maintainability , and extensibility of individual clients and servers .
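Adler's extended models are described architecturally rather than in code. As a rough illustration of the "basic request broker" idea charted above, the sketch below fans one client request out to several services and combines the replies so the client sees a single response; the service callables and their names are invented for the example and do not reproduce the paper's middleware.

```python
# Illustrative sketch only: a toy one-to-many request broker in the spirit of the
# coordination models described above. Service names and callables are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def broker(request, services):
    """Fan a single client request out to many services and combine the replies."""
    with ThreadPoolExecutor(max_workers=len(services)) as pool:
        futures = {name: pool.submit(fn, request) for name, fn in services.items()}
        # Collect and combine results so the client receives one response.
        return {name: f.result() for name, f in futures.items()}

if __name__ == "__main__":
    services = {
        "quote": lambda req: f"quote for {req}",
        "inventory": lambda req: f"3 units of {req}",
    }
    print(broker("widget-42", services))
```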

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (individual client) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud (multiple service) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
DISTRIBUTED COORDINATION MODELS FOR CLIENT-SERVER COMPUTING . Interactions between distributed applications presuppose an underlying control model to coordinate information exchanges and networking software to implement that model . The client/server control model defines distributed interactions in terms of one program requesting and obtaining a service from a second , possibly remote , application . However , this basic model provides inadequate design support when clients need to invoke multiple , independent services , coordinated to reflect how those services interrelate and contribute to the overall application . The author describes extensions to the basic client/server model that explicitly address one-to-many client/server interactions by discussing three basic design issues : how multiple service (alternate cloud, alternate cloud resources, community cloud) s are requested , how those services are managed , and how clients receive responses . The extended coordination models provide support for locating , obtaining , and synchronizing services , as well as for collecting and combining results from multiple servers in a manner that is transparent to clients . Extended models include a scripting engine for managing data and temporal dependencies among services ;
a basic request broker for mediating client access to distributed services ;
and extended request broker models that decompose composite services , manage redundant servers , and replicate messages to logical server groups . These coordination models were designed as generic , programmable control services . The control services are interoperable , so they can be combined like building blocks to match application-specific coordination requirements . The one-to-many coordination services are layered on top of an object-oriented , message-passing communication substrate , which transparently manages the complexities of interprogram interactions across networks of heterogeneous computers . This layered architecture lets complex coordination behaviors be modeled and executed external to application elements . It accomplishes this through high-level application programming interface (API) calls . The resulting partitioning of application and generic distributed behaviors yields improved modularity , maintainability , and extensibility of individual client (maximum capacity) s and servers .
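The "change region size based on changed regions of a graphical display" limitation recited in this claim has no counterpart in Adler's abstract. Purely to illustrate what such a metric could look like, the sketch below counts changed display tiles between two frame snapshots; the tile size and the list-of-rows frame format are assumptions, not the patent's disclosed method.

```python
# Illustrative sketch only: compute a "change region size" as the number of
# tile-by-tile blocks that differ between two frame snapshots. Assumed frame
# format: a list of equal-length rows of pixel values.
def change_region_size(prev_frame, curr_frame, tile=8):
    """Return the count of changed tiles between two frames."""
    changed = 0
    rows, cols = len(curr_frame), len(curr_frame[0])
    for y in range(0, rows, tile):
        for x in range(0, cols, tile):
            a = [row[x:x + tile] for row in prev_frame[y:y + tile]]
            b = [row[x:x + tile] for row in curr_frame[y:y + tile]]
            if a != b:
                changed += 1
    return changed

if __name__ == "__main__":
    prev = [[0] * 32 for _ in range(32)]
    curr = [row[:] for row in prev]
    curr[5][5] = 1  # one changed pixel dirties exactly one tile
    print(change_region_size(prev, curr))  # 1
```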

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud (multiple service) resources include one or more of resources included in public cloud , resources included in community cloud (multiple service) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
DISTRIBUTED COORDINATION MODELS FOR CLIENT-SERVER COMPUTING . Interactions between distributed applications presuppose an underlying control model to coordinate information exchanges and networking software to implement that model . The client/server control model defines distributed interactions in terms of one program requesting and obtaining a service from a second , possibly remote , application . However , this basic model provides inadequate design support when clients need to invoke multiple , independent services , coordinated to reflect how those services interrelate and contribute to the overall application . The author describes extensions to the basic client/server model that explicitly address one-to-many client/server interactions by discussing three basic design issues : how multiple service (alternate cloud, alternate cloud resources, community cloud) s are requested , how those services are managed , and how clients receive responses . The extended coordination models provide support for locating , obtaining , and synchronizing services , as well as for collecting and combining results from multiple servers in a manner that is transparent to clients . Extended models include a scripting engine for managing data and temporal dependencies among services ;
a basic request broker for mediating client access (alternate cloud resources include one) to distributed services ;
and extended request broker models that decompose composite services , manage redundant servers , and replicate messages to logical server groups . These coordination models were designed as generic , programmable control services . The control services are interoperable , so they can be combined like building blocks to match application-specific coordination requirements . The one-to-many coordination services are layered on top of an object-oriented , message-passing communication substrate , which transparently manages the complexities of interprogram interactions across networks of heterogeneous computers . This layered architecture lets complex coordination behaviors be modeled and executed external to application elements . It accomplishes this through high-level application programming interface (API) calls . The resulting partitioning of application and generic distributed behaviors yields improved modularity , maintainability , and extensibility of individual clients and servers .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (individual client) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud (multiple service) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
DISTRIBUTED COORDINATION MODELS FOR CLIENT-SERVER COMPUTING . Interactions between distributed applications presuppose an underlying control model to coordinate information exchanges and networking software to implement that model . The client/server control model defines distributed interactions in terms of one program requesting and obtaining a service from a second , possibly remote , application . However , this basic model provides inadequate design support when clients need to invoke multiple , independent services , coordinated to reflect how those services interrelate and contribute to the overall application . The author describes extensions to the basic client/server model that explicitly address one-to-many client/server interactions by discussing three basic design issues : how multiple service (alternate cloud, alternate cloud resources, community cloud) s are requested , how those services are managed , and how clients receive responses . The extended coordination models provide support for locating , obtaining , and synchronizing services , as well as for collecting and combining results from multiple servers in a manner that is transparent to clients . Extended models include a scripting engine for managing data and temporal dependencies among services ;
a basic request broker for mediating client access to distributed services ;
and extended request broker models that decompose composite services , manage redundant servers , and replicate messages to logical server groups . These coordination models were designed as generic , programmable control services . The control services are interoperable , so they can be combined like building blocks to match application-specific coordination requirements . The one-to-many coordination services are layered on top of an object-oriented , message-passing communication substrate , which transparently manages the complexities of interprogram interactions across networks of heterogeneous computers . This layered architecture lets complex coordination behaviors be modeled and executed external to application elements . It accomplishes this through high-level application programming interface (API) calls . The resulting partitioning of application and generic distributed behaviors yields improved modularity , maintainability , and extensibility of individual client (maximum capacity) s and servers .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud (multiple service) resources include one or more of resources included in public cloud , resources included in community cloud (multiple service) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
DISTRIBUTED COORDINATION MODELS FOR CLIENT-SERVER COMPUTING . Interactions between distributed applications presuppose an underlying control model to coordinate information exchanges and networking software to implement that model . The client/server control model defines distributed interactions in terms of one program requesting and obtaining a service from a second , possibly remote , application . However , this basic model provides inadequate design support when clients need to invoke multiple , independent services , coordinated to reflect how those services interrelate and contribute to the overall application . The author describes extensions to the basic client/server model that explicitly address one-to-many client/server interactions by discussing three basic design issues : how multiple service (alternate cloud, alternate cloud resources, community cloud) s are requested , how those services are managed , and how clients receive responses . The extended coordination models provide support for locating , obtaining , and synchronizing services , as well as for collecting and combining results from multiple servers in a manner that is transparent to clients . Extended models include a scripting engine for managing data and temporal dependencies among services ;
a basic request broker for mediating client access (alternate cloud resources include one) to distributed services ;
and extended request broker models that decompose composite services , manage redundant servers , and replicate messages to logical server groups . These coordination models were designed as generic , programmable control services . The control services are interoperable , so they can be combined like building blocks to match application-specific coordination requirements . The one-to-many coordination services are layered on top of an object-oriented , message-passing communication substrate , which transparently manages the complexities of interprogram interactions across networks of heterogeneous computers . This layered architecture lets complex coordination behaviors be modeled and executed external to application elements . It accomplishes this through high-level application programming interface (API) calls . The resulting partitioning of application and generic distributed behaviors yields improved modularity , maintainability , and extensibility of individual clients and servers .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
2008 IEEE NETWORK OPERATIONS AND MANAGEMENT SYMPOSIUM, VOLS 1 AND 2. : 363-370 2008

Publication Year: 2008

ReCon: A Tool To Recommend Dynamic Server Consolidation In Multi-cluster Data Centers

IBM India Research Laboratory

Mehta, Neogi

US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment (virtual machines, physical server) , comprising : determining a consumption rate (power cost) of cloud resources (virtual machines, physical server) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
ReCon : A Tool To Recommend Dynamic Server Consolidation In Multi-cluster Data Centers . Renewed focus on virtualization technologies and increased awareness about management and power cost (consumption rate, memory consumption rate) s of running under-utilized servers has spurred interest in consolidating existing applications on fewer number of servers in the data center . The ability to migrate virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) dynamically between physical server (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) s in real-time has also added a dynamic aspect to consolidation . However , there is a lack of planning tools that can analyze historical data collected from an existing environment and compute the potential benefits of server consolidation especially in the dynamic setting . In this paper we describe such a consolidation recommendation tool , called ReCon . Recon takes static and dynamic costs of given servers , the costs of VM migration , the historical resource consumption data from the existing environment and provides an optimal dynamic plan of VM to physical server mapping over time . We also present the results of applying the tool on historical data obtained from a large production environment .
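ReCon is described as computing an optimal dynamic VM-to-physical-server plan from static, dynamic, and migration costs, but the abstract does not give the algorithm. As a loose illustration only, the sketch below uses a greedy first-fit-decreasing placement over assumed per-VM demands and per-server capacities, marking VMs that do not fit as candidates for an external pool; it is not ReCon's formulation.

```python
# Illustrative sketch only: greedy first-fit-decreasing placement of VMs onto
# physical servers. Demands and capacities are assumed normalized values.
def plan_placement(vm_demand, server_capacity):
    """Map each VM to a server, or to None when no server can host it."""
    free = dict(server_capacity)        # remaining capacity per server
    plan = {}
    for vm, demand in sorted(vm_demand.items(), key=lambda kv: kv[1], reverse=True):
        for server, cap in sorted(free.items(), key=lambda kv: kv[1], reverse=True):
            if cap >= demand:
                plan[vm] = server
                free[server] = cap - demand
                break
        else:
            plan[vm] = None             # no fit: candidate for an external pool
    return plan

if __name__ == "__main__":
    print(plan_placement({"vm-a": 0.6, "vm-b": 0.5, "vm-c": 0.3},
                         {"host-1": 1.0, "host-2": 0.5}))
```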

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines, physical server) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
ReCon : A Tool To Recommend Dynamic Server Consolidation In Multi-cluster Data Centers . Renewed focus on virtualization technologies and increased awareness about management and power costs of running under-utilized servers has spurred interest in consolidating existing applications on fewer number of servers in the data center . The ability to migrate virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) dynamically between physical server (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) s in real-time has also added a dynamic aspect to consolidation . However , there is a lack of planning tools that can analyze historical data collected from an existing environment and compute the potential benefits of server consolidation especially in the dynamic setting . In this paper we describe such a consolidation recommendation tool , called ReCon . Recon takes static and dynamic costs of given servers , the costs of VM migration , the historical resource consumption data from the existing environment and provides an optimal dynamic plan of VM to physical server mapping over time . We also present the results of applying the tool on historical data obtained from a large production environment .
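The LIRS (low inter-reference recency set) replacement scheme recited in claims 2-4, 8, and 14 keys on inter-reference recency, i.e., the number of distinct other items referenced between two consecutive references to the same item. ReCon's abstract does not mention LIRS; the sketch below only illustrates that ranking quantity over an assumed trace of per-VM usage events and omits the full LIR/HIR stack maintenance of the actual LIRS algorithm.

```python
# Illustrative sketch only: rank VMs by inter-reference recency (IRR) computed from
# an access trace. The trace format (a flat list of VM ids) is an assumption.
def irr_ranking(trace):
    """Return VM ids ordered from lowest to highest inter-reference recency."""
    last_two = {}                       # vm -> positions of its last two references
    for pos, vm in enumerate(trace):
        last_two.setdefault(vm, []).append(pos)
        last_two[vm] = last_two[vm][-2:]

    def irr(vm):
        refs = last_two[vm]
        if len(refs) < 2:
            return float("inf")         # referenced only once: treat as cold
        return len(set(trace[refs[0] + 1:refs[1]]) - {vm})

    return sorted(last_two, key=irr)

if __name__ == "__main__":
    # vm-1 is re-referenced with few distinct VMs in between -> lowest IRR (hottest)
    print(irr_ranking(["vm-1", "vm-2", "vm-1", "vm-3", "vm-2", "vm-4"]))
```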

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines, physical server) using the LIRS replacement scheme comprises using LIRS based processor usage tracking .
ReCon : A Tool To Recommend Dynamic Server Consolidation In Multi-cluster Data Centers . Renewed focus on virtualization technologies and increased awareness about management and power costs of running under-utilized servers has spurred interest in consolidating existing applications on fewer number of servers in the data center . The ability to migrate virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) dynamically between physical server (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) s in real-time has also added a dynamic aspect to consolidation . However , there is a lack of planning tools that can analyze historical data collected from an existing environment and compute the potential benefits of server consolidation especially in the dynamic setting . In this paper we describe such a consolidation recommendation tool , called ReCon . Recon takes static and dynamic costs of given servers , the costs of VM migration , the historical resource consumption data from the existing environment and provides an optimal dynamic plan of VM to physical server mapping over time . We also present the results of applying the tool on historical data obtained from a large production environment .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines, physical server) using the LIRS replacement scheme comprises using LIRS based memory usage tracking .
ReCon : A Tool To Recommend Dynamic Server Consolidation In Multi-cluster Data Centers . Renewed focus on virtualization technologies and increased awareness about management and power costs of running under-utilized servers has spurred interest in consolidating existing applications on fewer number of servers in the data center . The ability to migrate virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) dynamically between physical server (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) s in real-time has also added a dynamic aspect to consolidation . However , there is a lack of planning tools that can analyze historical data collected from an existing environment and compute the potential benefits of server consolidation especially in the dynamic setting . In this paper we describe such a consolidation recommendation tool , called ReCon . Recon takes static and dynamic costs of given servers , the costs of VM migration , the historical resource consumption data from the existing environment and provides an optimal dynamic plan of VM to physical server mapping over time . We also present the results of applying the tool on historical data obtained from a large production environment .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (virtual machines, physical server) include one or more of resources included in public cloud , resources included in community cloud (virtual machines, physical server) , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines, physical server) , or resources included in virtual private networks (VPNs) .
ReCon : A Tool To Recommend Dynamic Server Consolidation In Multi-cluster Data Centers . Renewed focus on virtualization technologies and increased awareness about management and power costs of running under-utilized servers has spurred interest in consolidating existing applications on fewer number of servers in the data center . The ability to migrate virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) dynamically between physical server (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) s in real-time has also added a dynamic aspect to consolidation . However , there is a lack of planning tools that can analyze historical data collected from an existing environment and compute the potential benefits of server consolidation especially in the dynamic setting . In this paper we describe such a consolidation recommendation tool , called ReCon . Recon takes static and dynamic costs of given servers , the costs of VM migration , the historical resource consumption data from the existing environment and provides an optimal dynamic plan of VM to physical server mapping over time . We also present the results of applying the tool on historical data obtained from a large production environment .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines, physical server) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
ReCon : A Tool To Recommend Dynamic Server Consolidation In Multi-cluster Data Centers . Renewed focus on virtualization technologies and increased awareness about management and power costs of running under-utilized servers has spurred interest in consolidating existing applications on fewer number of servers in the data center . The ability to migrate virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) dynamically between physical server (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) s in real-time has also added a dynamic aspect to consolidation . However , there is a lack of planning tools that can analyze historical data collected from an existing environment and compute the potential benefits of server consolidation especially in the dynamic setting . In this paper we describe such a consolidation recommendation tool , called ReCon . Recon takes static and dynamic costs of given servers , the costs of VM migration , the historical resource consumption data from the existing environment and provides an optimal dynamic plan of VM to physical server mapping over time . We also present the results of applying the tool on historical data obtained from a large production environment .
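The least recently used (LRU) scheme recited in claims 6, 12, and 18 orders items by recency of use. ReCon's abstract likewise does not mention LRU; the sketch below shows only the ordering primitive over assumed per-VM "touch" events, not how the patent applies it to cloud resources.

```python
# Illustrative sketch only: LRU ordering of VMs using an OrderedDict as the
# recency structure. The touch() events are assumed monitoring callbacks.
from collections import OrderedDict

class LruPriority:
    def __init__(self):
        self._recency = OrderedDict()

    def touch(self, vm_id):
        """Record that a VM just consumed resources (becomes most recently used)."""
        self._recency.pop(vm_id, None)
        self._recency[vm_id] = True

    def least_recently_used_first(self):
        """VMs ordered from least to most recently used, e.g. as migration candidates."""
        return list(self._recency)

if __name__ == "__main__":
    lru = LruPriority()
    for vm in ["vm-1", "vm-2", "vm-1", "vm-3"]:
        lru.touch(vm)
    print(lru.least_recently_used_first())  # ['vm-2', 'vm-1', 'vm-3']
```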

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate (power cost) of cloud resources (virtual machines, physical server) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment (virtual machines, physical server) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
ReCon : A Tool To Recommend Dynamic Server Consolidation In Multi-cluster Data Centers . Renewed focus on virtualization technologies and increased awareness about management and power cost (consumption rate, memory consumption rate) s of running under-utilized servers has spurred interest in consolidating existing applications on fewer number of servers in the data center . The ability to migrate virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) dynamically between physical server (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) s in real-time has also added a dynamic aspect to consolidation . However , there is a lack of planning tools that can analyze historical data collected from an existing environment and compute the potential benefits of server consolidation especially in the dynamic setting . In this paper we describe such a consolidation recommendation tool , called ReCon . Recon takes static and dynamic costs of given servers , the costs of VM migration , the historical resource consumption data from the existing environment and provides an optimal dynamic plan of VM to physical server mapping over time . We also present the results of applying the tool on historical data obtained from a large production environment .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines, physical server) using a low inter-reference recency set (LIRS) replacement scheme .
ReCon : A Tool To Recommend Dynamic Server Consolidation In Multi-cluster Data Centers . Renewed focus on virtualization technologies and increased awareness about management and power costs of running under-utilized servers has spurred interest in consolidating existing applications on fewer number of servers in the data center . The ability to migrate virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) dynamically between physical server (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) s in real-time has also added a dynamic aspect to consolidation . However , there is a lack of planning tools that can analyze historical data collected from an existing environment and compute the potential benefits of server consolidation especially in the dynamic setting . In this paper we describe such a consolidation recommendation tool , called ReCon . Recon takes static and dynamic costs of given servers , the costs of VM migration , the historical resource consumption data from the existing environment and provides an optimal dynamic plan of VM to physical server mapping over time . We also present the results of applying the tool on historical data obtained from a large production environment .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (virtual machines, physical server) include one or more of resources included in public cloud , resources included in community cloud (virtual machines, physical server) , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines, physical server) , or resources included in virtual private networks (VPNs) .
ReCon : A Tool To Recommend Dynamic Server Consolidation In Multi-cluster Data Centers . Renewed focus on virtualization technologies and increased awareness about management and power costs of running under-utilized servers has spurred interest in consolidating existing applications on fewer number of servers in the data center . The ability to migrate virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) dynamically between physical server (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) s in real-time has also added a dynamic aspect to consolidation . However , there is a lack of planning tools that can analyze historical data collected from an existing environment and compute the potential benefits of server consolidation especially in the dynamic setting . In this paper we describe such a consolidation recommendation tool , called ReCon . Recon takes static and dynamic costs of given servers , the costs of VM migration , the historical resource consumption data from the existing environment and provides an optimal dynamic plan of VM to physical server mapping over time . We also present the results of applying the tool on historical data obtained from a large production environment .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines, physical server) using least recently used (LRU) replacement scheme .
ReCon : A Tool To Recommend Dynamic Server Consolidation In Multi-cluster Data Centers . Renewed focus on virtualization technologies and increased awareness about management and power costs of running under-utilized servers has spurred interest in consolidating existing applications on fewer number of servers in the data center . The ability to migrate virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) dynamically between physical server (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) s in real-time has also added a dynamic aspect to consolidation . However , there is a lack of planning tools that can analyze historical data collected from an existing environment and compute the potential benefits of server consolidation especially in the dynamic setting . In this paper we describe such a consolidation recommendation tool , called ReCon . Recon takes static and dynamic costs of given servers , the costs of VM migration , the historical resource consumption data from the existing environment and provides an optimal dynamic plan of VM to physical server mapping over time . We also present the results of applying the tool on historical data obtained from a large production environment .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment (virtual machines, physical server) , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate (power cost) of cloud resources (virtual machines, physical server) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
ReCon : A Tool To Recommend Dynamic Server Consolidation In Multi-cluster Data Centers . Renewed focus on virtualization technologies and increased awareness about management and power cost (consumption rate, memory consumption rate) s of running under-utilized servers has spurred interest in consolidating existing applications on fewer number of servers in the data center . The ability to migrate virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) dynamically between physical server (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) s in real-time has also added a dynamic aspect to consolidation . However , there is a lack of planning tools that can analyze historical data collected from an existing environment and compute the potential benefits of server consolidation especially in the dynamic setting . In this paper we describe such a consolidation recommendation tool , called ReCon . Recon takes static and dynamic costs of given servers , the costs of VM migration , the historical resource consumption data from the existing environment and provides an optimal dynamic plan of VM to physical server mapping over time . We also present the results of applying the tool on historical data obtained from a large production environment .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines, physical server) using a low inter-reference recency set (LIRS) replacement scheme .
ReCon : A Tool To Recommend Dynamic Server Consolidation In Multi-cluster Data Centers . Renewed focus on virtualization technologies and increased awareness about management and power costs of running under-utilized servers has spurred interest in consolidating existing applications on fewer number of servers in the data center . The ability to migrate virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) dynamically between physical server (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) s in real-time has also added a dynamic aspect to consolidation . However , there is a lack of planning tools that can analyze historical data collected from an existing environment and compute the potential benefits of server consolidation especially in the dynamic setting . In this paper we describe such a consolidation recommendation tool , called ReCon . Recon takes static and dynamic costs of given servers , the costs of VM migration , the historical resource consumption data from the existing environment and provides an optimal dynamic plan of VM to physical server mapping over time . We also present the results of applying the tool on historical data obtained from a large production environment .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (virtual machines, physical server) include one or more of resources included in public cloud , resources included in community cloud (virtual machines, physical server) , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines, physical server) , or resources included in virtual private networks (VPNs) .
ReCon : A Tool To Recommend Dynamic Server Consolidation In Multi-cluster Data Centers . Renewed focus on virtualization technologies and increased awareness about management and power costs of running under-utilized servers has spurred interest in consolidating existing applications on fewer number of servers in the data center . The ability to migrate virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) dynamically between physical server (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) s in real-time has also added a dynamic aspect to consolidation . However , there is a lack of planning tools that can analyze historical data collected from an existing environment and compute the potential benefits of server consolidation especially in the dynamic setting . In this paper we describe such a consolidation recommendation tool , called ReCon . Recon takes static and dynamic costs of given servers , the costs of VM migration , the historical resource consumption data from the existing environment and provides an optimal dynamic plan of VM to physical server mapping over time . We also present the results of applying the tool on historical data obtained from a large production environment .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines, physical server) using least recently used (LRU) replacement scheme .
ReCon : A Tool To Recommend Dynamic Server Consolidation In Multi-cluster Data Centers . Renewed focus on virtualization technologies and increased awareness about management and power costs of running under-utilized servers has spurred interest in consolidating existing applications on fewer number of servers in the data center . The ability to migrate virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) dynamically between physical server (cloud resources, cloud computing environment, alternate cloud resources, Internet resources, community cloud, alternate cloud resources include one) s in real-time has also added a dynamic aspect to consolidation . However , there is a lack of planning tools that can analyze historical data collected from an existing environment and compute the potential benefits of server consolidation especially in the dynamic setting . In this paper we describe such a consolidation recommendation tool , called ReCon . Recon takes static and dynamic costs of given servers , the costs of VM migration , the historical resource consumption data from the existing environment and provides an optimal dynamic plan of VM to physical server mapping over time . We also present the results of applying the tool on historical data obtained from a large production environment .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
2000 IEEE SERVICE PORTABILITY AND VIRTUAL CUSTOMER ENVIRONMENTS. : 20-28 2001

Publication Year: 2001

Achieving Service Portability In ICEBERG

University of California, Berkeley

Mao, Katz
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates (end user) for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud (end devices) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Achieving Service Portability In ICEBERG . There is a growing trend toward service access through heterogeneous devices from diverse networks . It thus becomes crucial to provide support for mobility and portability at the application and service level to enable seamless service access from any end devices (alternate cloud) and access networks . Such support allows transparent roaming and ubiquitous service access . In the ICEBERG [1] project , our goal is to develop such a service infrastructure to integrate a variety telephony and data services spanning diverse access networks reaching heterogeneous end user (access rates) s . In this paper , we discuss our techniques for achieving goals of personal and service mobility , transparent network- and device-independent service access , as well as highly scalable and fault-tolerant access to composed service entities across the wide area . We evaluate our implementation through applications such as Universal In-box , Interactive Voice Room Control , MP3-Jukebox access using a cell-phone , and realtime video delivery to wireless clients .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud (end devices) resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
Achieving Service Portability In ICEBERG . There is a growing trend toward service access through heterogeneous devices from diverse networks . It thus becomes crucial to provide support for mobility and portability at the application and service level to enable seamless service access from any end devices (alternate cloud) and access networks . Such support allows transparent roaming and ubiquitous service access . In the ICEBERG [1] project , our goal is to develop such a service infrastructure to integrate a variety telephony and data services spanning diverse access networks reaching heterogeneous end users . In this paper , we discuss our techniques for achieving goals of personal and service mobility , transparent network- and device-independent service access , as well as highly scalable and fault-tolerant access to composed service entities across the wide area . We evaluate our implementation through applications such as Universal In-box , Interactive Voice Room Control , MP3-Jukebox access using a cell-phone , and realtime video delivery to wireless clients .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates (end user) for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud (end devices) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Achieving Service Portability In ICEBERG . There is a growing trend toward service access through heterogeneous devices from diverse networks . It thus becomes crucial to provide support for mobility and portability at the application and service level to enable seamless service access from any end devices (alternate cloud) and access networks . Such support allows transparent roaming and ubiquitous service access . In the ICEBERG [1] project , our goal is to develop such a service infrastructure to integrate a variety telephony and data services spanning diverse access networks reaching heterogeneous end user (access rates) s . In this paper , we discuss our techniques for achieving goals of personal and service mobility , transparent network- and device-independent service access , as well as highly scalable and fault-tolerant access to composed service entities across the wide area . We evaluate our implementation through applications such as Universal In-box , Interactive Voice Room Control , MP3-Jukebox access using a cell-phone , and realtime video delivery to wireless clients .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud (end devices) resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
Achieving Service Portability In ICEBERG . There is a growing trend toward service access through heterogeneous devices from diverse networks . It thus becomes crucial to provide support for mobility and portability at the application and service level to enable seamless service access from any end devices (alternate cloud) and access networks . Such support allows transparent roaming and ubiquitous service access . In the ICEBERG [1] project , our goal is to develop such a service infrastructure to integrate a variety telephony and data services spanning diverse access networks reaching heterogeneous end users . In this paper , we discuss our techniques for achieving goals of personal and service mobility , transparent network- and device-independent service access , as well as highly scalable and fault-tolerant access to composed service entities across the wide area . We evaluate our implementation through applications such as Universal In-box , Interactive Voice Room Control , MP3-Jukebox access using a cell-phone , and realtime video delivery to wireless clients .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates (end user) , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud (end devices) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Achieving Service Portability In ICEBERG . There is a growing trend toward service access through heterogeneous devices from diverse networks . It thus becomes crucial to provide support for mobility and portability at the application and service level to enable seamless service access from any end devices (alternate cloud) and access networks . Such support allows transparent roaming and ubiquitous service access . In the ICEBERG [1] project , our goal is to develop such a service infrastructure to integrate a variety of telephony and data services spanning diverse access networks reaching heterogeneous end users (access rates) . In this paper , we discuss our techniques for achieving goals of personal and service mobility , transparent network- and device-independent service access , as well as highly scalable and fault-tolerant access to composed service entities across the wide area . We evaluate our implementation through applications such as Universal In-box , Interactive Voice Room Control , MP3-Jukebox access using a cell-phone , and real-time video delivery to wireless clients .
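
For orientation when reading the independent claims against the cited references, the following is a minimal Python sketch of the claimed flow: monitor per-VM consumption rates, detect whether the change in consumption exceeds a predetermined threshold, switch from a first to a second prioritization scheme that honors a maximum capacity for allowed cloud resources, and select the VMs whose consumption would be migrated to alternate cloud resources. Every name below (VMStats, prioritize_first, prioritize_second, manage) is hypothetical; this is not the patented implementation and not code from any cited reference.

    from dataclasses import dataclass

    @dataclass
    class VMStats:
        vm_id: str
        cpu: float            # CPU consumption rate
        memory: float         # memory consumption rate
        io: float             # I/O access rate
        change_region: float  # size of changed regions of the VM's graphical display

    def consumption(stats: VMStats) -> float:
        # Any of the monitored metrics can drive the rate; a simple sum stands in here.
        return stats.cpu + stats.memory + stats.io + stats.change_region

    def prioritize_first(vms):
        # First resource management scheme: order VMs by the determined consumption rate.
        return sorted(vms, key=consumption, reverse=True)

    def prioritize_second(vms, max_capacity):
        # Second scheme: additionally respect a maximum capacity for allowed cloud
        # resources; VMs that do not fit become migration candidates.
        candidates, used = [], 0.0
        for vm in sorted(vms, key=consumption):
            if used + consumption(vm) > max_capacity:
                candidates.append(vm)
            else:
                used += consumption(vm)
        return candidates

    def manage(prev, curr, threshold, max_capacity):
        # Determine whether the change in the consumption rate exceeds the threshold.
        prev_rate = {vm.vm_id: consumption(vm) for vm in prev}
        exceeded = any(abs(consumption(vm) - prev_rate.get(vm.vm_id, 0.0)) > threshold
                       for vm in curr)
        if not exceeded:
            return prioritize_first(curr), []   # stay on the first scheme, migrate nothing
        candidates = prioritize_second(curr, max_capacity)
        # Migrate the consumption of these VMs to alternate cloud resources outside the environment.
        return candidates, [vm.vm_id for vm in candidates]

    prev = [VMStats("vm1", 10, 5, 1, 0), VMStats("vm2", 2, 1, 0, 0)]
    curr = [VMStats("vm1", 40, 5, 1, 0), VMStats("vm2", 2, 1, 0, 0)]
    print(manage(prev, curr, threshold=10.0, max_capacity=20.0))

The sketch is only meant to make the claim elements (threshold test, two schemes, capacity cap, migration set) concrete; the patent does not prescribe any particular data structure or ordering.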

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud (end devices) resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
Achieving Service Portability In ICEBERG . There is a growing trend toward service access through heterogeneous devices from diverse networks . It thus becomes crucial to provide support for mobility and portability at the application and service level to enable seamless service access from any end devices (alternate cloud) and access networks . Such support allows transparent roaming and ubiquitous service access . In the ICEBERG [1] project , our goal is to develop such a service infrastructure to integrate a variety telephony and data services spanning diverse access networks reaching heterogeneous end users . In this paper , we discuss our techniques for achieving goals of personal and service mobility , transparent network- and device-independent service access , as well as highly scalable and fault-tolerant access to composed service entities across the wide area . We evaluate our implementation through applications such as Universal In-box , Interactive Voice Room Control , MP3-Jukebox access using a cell-phone , and realtime video delivery to wireless clients .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
ICS 09: PROCEEDINGS OF THE 2009 ACM SIGARCH INTERNATIONAL CONFERENCE ON SUPERCOMPUTING. : 225-234 2009

Publication Year: 2009

Virtualization Polling Engine (VPE): Using Dedicated CPU Cores To Accelerate I/O Virtualization

IBM Thomas J. Watson Research Center

Liu, Abali, Gschwind, Nicolau, Salapura, Moreira
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment (Virtual machine) , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Virtualization Polling Engine (VPE) : Using Dedicated CPU Cores To Accelerate I/O Virtualization . Virtual machine (cloud computing environment) (VM) technologies are making rapid progress and VM performance is approaching that of native hardware in many aspects . Achieving high performance for I/O virtualization remains a challenge , however , especially for high speed networking devices such as 10 Gigabit Ethernet (10 GbE) NICs . Traditional software-based approaches to I/O virtualization usually suffer significant performance degradation compared with native hardware . Hardware-based approaches that allow direct device access in VMs can achieve good performance , albeit at the expense of increased hardware cost and increased complexity in achieving tasks such as VM checkpointing , migration , and record/replay . Recently , the trend in microprocessor design has shifted from achieving higher CPU frequencies to putting more cores in a single chip , thus the cost of each core is rapidly decreasing . In this paper , we propose a new I/O virtualization approach called the Virtualization Polling Engine (VPE) . VPE introduces a concept called virtualization onload , which takes advantage of dedicated CPU cores to help with the virtualization of I/O devices by using an event-driven execution model with dedicated polling threads . It can significantly reduce virtualization overhead and achieve performance close to the hardware-based approaches without requiring special hardware support . Using our VPE approach , we developed a prototype called KVM-VPE to provide Ethernet virtualization support for KVM . Our experiments in a 10GbE testbed showed that VPE significantly outperformed the original KVM . In Netperf TCP tests our prototype achieved over 5 times the bandwidth for transmitting (Tx) and over 3 times the bandwidth for receiving (Rx) compared with the original KVM . KVM-VPE also supports direct user application access to the virtual Ethernet interfaces and achieved 7.4 μs end-to-end latency between two VMs on different machines in our testbed . Overall , our research demonstrated that VPE is a promising approach to high performance I/O virtualization in the coming multicore era .
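
The VPE abstract above turns on dedicating CPU cores to an event-driven polling loop for virtualized I/O. Purely as a generic illustration of that pattern, and not the VPE or KVM-VPE implementation (whose details are not given here), a dedicated polling thread serving queued I/O requests can be sketched as follows; all names are invented.

    import queue
    import threading

    class PollingEngine:
        def __init__(self):
            self.requests = queue.Queue()
            self.running = True
            # The polling thread stands in for a dedicated core doing virtualization work.
            self.worker = threading.Thread(target=self._poll_loop, daemon=True)
            self.worker.start()

        def submit(self, io_request):
            # A guest enqueues an I/O request instead of trapping into heavyweight emulation.
            self.requests.put(io_request)

        def _poll_loop(self):
            while self.running:
                try:
                    req = self.requests.get(timeout=0.01)   # event-driven: wait briefly for work
                except queue.Empty:
                    continue
                req()                                       # perform the device operation

        def stop(self):
            self.running = False
            self.worker.join()

    engine = PollingEngine()
    engine.submit(lambda: print("handled one virtual I/O request"))
    engine.stop()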

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (direct user) , or resources included in virtual private networks (VPNs) .
Virtualization Polling Engine (VPE) : Using Dedicated CPU Cores To Accelerate I/O Virtualization . Virtual machine (VM) technologies are making rapid progress and VM performance is approaching that of native hardware in many aspects . Achieving high performance for I/O virtualization remains a challenge , however , especially for high speed networking devices such as 10 Gigabit Ethernet (10 GbE) NICs . Traditional software-based approaches to I/O virtualization usually suffer significant performance degradation compared with native hardware . Hardware-based approaches that allow direct device access in VMs can achieve good performance , albeit at the expense of increased hardware cost and increased complexity in achieving tasks such as VM checkpointing , migration , and record/replay . Recently , the trend in microprocessor design has shifted from achieving higher CPU frequencies to putting more cores in a single chip , thus the cost of each core is rapidly decreasing . In this paper , we propose a new I/O virtualization approach called the Virtualization Polling Engine (VPE) . VPE introduces a concept called virtualization onload , which takes advantage of dedicated CPU cores to help with the virtualization of I/O devices by using an event-driven execution model with dedicated polling threads . It can significantly reduce virtualization overhead and achieve performance close to the hardware-based approaches without requiring special hardware support . Using our VPE approach , we developed a prototype called KVM-VPE to provide Ethernet virtualization support for KVM . Our experiments in a 10GbE testbed showed that VPE significantly outperformed the original KVM . In Netperf TCP tests our prototype achieved over 5 times the bandwidth for transmitting (Tx) and over 3 times the bandwidth for receiving (Rx) compared with the original KVM . KVM-VPE also supports direct user (Internet resources) application access to the virtual Ethernet interfaces and achieved 7.4 μs end-to-end latency between two VMs on different machines in our testbed . Overall , our research demonstrated that VPE is a promising approach to high performance I/O virtualization in the coming multicore era .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment (Virtual machine) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Virtualization Polling Engine (VPE) : Using Dedicated CPU Cores To Accelerate I/O Virtualization . Virtual machine (cloud computing environment) (VM) technologies are making rapid progress and VM performance is approaching that of native hardware in many aspects . Achieving high performance for I/O virtualization remains a challenge , however , especially for high speed networking devices such as 10 Gigabit Ethernet (10 GbE) NICs . Traditional software-based approaches to I/O virtualization usually suffer significant performance degradation compared with native hardware . Hardware-based approaches that allow direct device access in VMs can achieve good performance , albeit at the expense of increased hardware cost and increased complexity in achieving tasks such as VM checkpointing , migration , and record/replay . Recently , the trend in microprocessor design has shifted from achieving higher CPU frequencies to putting more cores in a single chip , thus the cost of each core is rapidly decreasing . In this paper , we propose a new I/O virtualization approach called the Virtualization Polling Engine (VPE) . VPE introduces a concept called virtualization onload , which takes advantage of dedicated CPU cores to help with the virtualization of I/O devices by using an event-driven execution model with dedicated polling threads . It can significantly reduce virtualization overhead and achieve performance close to the hardware-based approaches without requiring special hardware support . Using our VPE approach , we developed a prototype called KVM-VPE to provide Ethernet virtualization support for KVM . Our experiments in a 10GbE testbed showed that VPE significantly outperformed the original KVM . In Netperf TCP tests our prototype achieved over 5 times the bandwidth for transmitting (Tx) and over 3 times the bandwidth for receiving (Rx) compared with the original KVM . KVM-VPE also supports direct user application access to the virtual Ethernet interfaces and achieved 7.4 μs end-to-end latency between two VMs on different machines in our testbed . Overall , our research demonstrated that VPE is a promising approach to high performance I/O virtualization in the coming multicore era .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (direct user) , or resources included in virtual private networks (VPNs) .
Virtualization Polling Engine (VPE) : Using Dedicated CPU Cores To Accelerate I/O Virtualization . Virtual machine (VM) technologies are making rapid progress and VM performance is approaching that of native hardware in many aspects . Achieving high performance for I/O virtualization remains a challenge , however , especially for high speed networking devices such as 10 Gigabit Ethernet (10 GbE) NICs . Traditional software-based approaches to I/O virtualization usually suffer significant performance degradation compared with native hardware . Hardware-based approaches that allow direct device access in VMs can achieve good performance , albeit at the expense of increased hardware cost and increased complexity in achieving tasks such as VM checkpointing , migration , and record/replay . Recently , the trend in microprocessor design has shifted from achieving higher CPU frequencies to putting more cores in a single chip , thus the cost of each core is rapidly decreasing . In this paper , we propose a new I/O virtualization approach called the Virtualization Polling Engine (VPE) . VPE introduces a concept called virtualization onload , which takes advantage of dedicated CPU cores to help with the virtualization of I/O devices by using an event-driven execution model with dedicated polling threads . It can significantly reduce virtualization overhead and achieve performance close to the hardware-based approaches without requiring special hardware support . Using our VPE approach , we developed a prototype called KVM-VPE to provide Ethernet virtualization support for KVM . Our experiments in a 10GbE testbed showed that VPE significantly outperformed the original KVM . In Netperf TCP tests our prototype achieved over 5 times the bandwidth for transmitting (Tx) and over 3 times the bandwidth for receiving (Rx) compared with the original KVM . KVM-VPE also supports direct user (Internet resources) application access to the virtual Ethernet interfaces and achieved 7.4 μs end-to-end latency between two VMs on different machines in our testbed . Overall , our research demonstrated that VPE is a promising approach to high performance I/O virtualization in the coming multicore era .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment (Virtual machine) , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate (CPU core) , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Virtualization Polling Engine (VPE) : Using Dedicated CPU Cores To Accelerate I/O Virtualization . Virtual machine (cloud computing environment) (VM) technologies are making rapid progress and VM performance is approaching that of native hardware in many aspects . Achieving high performance for I/O virtualization remains a challenge , however , especially for high speed networking devices such as 10 Gigabit Ethernet (10 GbE) NICs . Traditional software-based approaches to I/O virtualization usually suffer significant performance degradation compared with native hardware . Hardware-based approaches that allow direct device access in VMs can achieve good performance , albeit at the expense of increased hardware cost and increased complexity in achieving tasks such as VM checkpointing , migration , and record/replay . Recently , the trend in microprocessor design has shifted from achieving higher CPU frequencies to putting more cores in a single chip , thus the cost of each core is rapidly decreasing . In this paper , we propose a new I/O virtualization approach called the Virtualization Polling Engine (VPE) . VPE introduces a concept called virtualization onload , which takes advantage of dedicated CPU cores (CPU consumption rate) to help with the virtualization of I/O devices by using an event-driven execution model with dedicated polling threads . It can significantly reduce virtualization overhead and achieve performance close to the hardware-based approaches without requiring special hardware support . Using our VPE approach , we developed a prototype called KVM-VPE to provide Ethernet virtualization support for KVM . Our experiments in a 10GbE testbed showed that VPE significantly outperformed the original KVM . In Netperf TCP tests our prototype achieved over 5 times the bandwidth for transmitting (Tx) and over 3 times the bandwidth for receiving (Rx) compared with the original KVM . KVM-VPE also supports direct user application access to the virtual Ethernet interfaces and achieved 7.4 μs end-to-end latency between two VMs on different machines in our testbed . Overall , our research demonstrated that VPE is a promising approach to high performance I/O virtualization in the coming multicore era .
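
Claim 13 as charted above also recites a change region size determined according to changed regions of a graphical display generated by a VM. As a purely hypothetical illustration of how such a metric could be computed, the sketch below diffs two framebuffer snapshots and reports the bounding-box area of the changed pixels; it is not taken from the patent specification or from the cited reference.

    def change_region_size(prev_frame, curr_frame):
        # Frames are equal-sized 2D lists of pixel values; find the rows and columns
        # containing changed pixels and report the bounding-box area of those changes.
        rows = [r for r, (a, b) in enumerate(zip(prev_frame, curr_frame)) if a != b]
        if not rows:
            return 0
        cols = [
            c
            for a, b in zip(prev_frame, curr_frame)
            for c, (pa, pb) in enumerate(zip(a, b))
            if pa != pb
        ]
        return (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)

    prev = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
    curr = [[0, 0, 0], [0, 1, 1], [0, 0, 0]]
    print(change_region_size(prev, curr))   # 2: a 1x2 changed region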

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (direct user) , or resources included in virtual private networks (VPNs) .
Virtualization Polling Engine (VPE) : Using Dedicated CPU Cores To Accelerate I/O Virtualization . Virtual machine (VM) technologies are making rapid progress and VM performance is approaching that of native hardware in many aspects . Achieving high performance for I/O virtualization remains a challenge , however , especially for high speed networking devices such as 10 Gigabit Ethernet (10 GbE) NICs . Traditional software-based approaches to I/O virtualization usually suffer significant performance degradation compared with native hardware . Hardware-based approaches that allow direct device access in VMs can achieve good performance , albeit at the expense of increased hardware cost and increased complexity in achieving tasks such as VM checkpointing , migration , and record/replay . Recently , the trend in microprocessor design has shifted from achieving higher CPU frequencies to putting more cores in a single chip , thus the cost of each core is rapidly decreasing . In this paper , we propose a new I/O virtualization approach called the Virtualization Polling Engine (VPE) . VPE introduces a concept called virtualization onload , which takes advantage of dedicated CPU cores to help with the virtualization of I/O devices by using an event-driven execution model with dedicated polling threads . It can significantly reduce virtualization overhead and achieve performance close to the hardware-based approaches without requiring special hardware support . Using our VPE approach , we developed a prototype called KVM-VPE to provide Ethernet virtualization support for KVM . Our experiments in a 10GbE testbed showed that VPE significantly outperformed the original KVM . In Netperf TCP tests our prototype achieved over 5 times the bandwidth for transmitting (Tx) and over 3 times the bandwidth for receiving (Rx) compared with the original KVM . KVM-VPE also supports direct user (Internet resources) application access to the virtual Ethernet interfaces and achieved 7.4 μs end-to-end latency between two VMs on different machines in our testbed . Overall , our research demonstrated that VPE is a promising approach to high performance I/O virtualization in the coming multicore era .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
2007 10TH IFIP/IEEE INTERNATIONAL SYMPOSIUM ON INTEGRATED NETWORK MANAGEMENT (IM 2009), VOLS 1 AND 2. : 139-148 2007

Publication Year: 2007

Server Virtualization In Autonomic Management Of Heterogeneous Workloads

International Business Machines Corporation

Steinder, Whalley, Carrera, Gaweda, Chess, Ieee
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment (virtual machines) , comprising : determining a consumption rate (virtual server) of cloud resources (virtual machines) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Server Virtualization In Autonomic Management Of Heterogeneous Workloads . Server virtualization opens up a range of new possibilities for autonomic datacenter management , through the availability of new automation mechanisms that can be exploited to control and monitor tasks running within virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) . This offers not only new and more flexible control to the operator using a management console , but also more powerful and flexible autonomic control , through management software that maintains the system in a desired state in the face of changing workload and demand . This paper explores in particular the use of server virtualization technology in the autonomic management of data centers running a heterogeneous mix of workloads . We present a system that manages heterogeneous workloads to their performance goals and demonstrate its effectiveness via real-system experiments and simulation . We also present some of the significant challenges to wider usage of virtual servers (consumption rate, memory consumption rate) in autonomic datacenter management .

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
Server Virtualization In Autonomic Management Of Heterogeneous Workloads . Server virtualization opens up a range of new possibilities for autonomic datacenter management , through the availability of new automation mechanisms that can be exploited to control and monitor tasks running within virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) . This offers not only new and more flexible control to the operator using a management console , but also more powerful and flexible autonomic control , through management software that maintains the system in a desired state in the face of changing workload and demand . This paper explores in particular the use of server virtualization technology in the autonomic management of data centers running a heterogeneous mix of workloads . We present a system that manages heterogeneous workloads to their performance goals and demonstrate its effectiveness via real-system experiments and simulation . We also present some of the significant challenges to wider usage of virtual servers in autonomic datacenter management .
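
Claim 2 above narrows the first resource management scheme to a low inter-reference recency set (LIRS) replacement scheme. As a simplified sketch of the idea behind LIRS, ranking items by inter-reference recency rather than plain recency, and explicitly not a complete LIRS implementation nor the patent's scheme, consider the following; all names are hypothetical.

    def inter_reference_recency(trace, item):
        # Positions of the item's accesses in the trace (most recent last).
        positions = [i for i, x in enumerate(trace) if x == item]
        if len(positions) < 2:
            return float("inf")       # referenced only once: weakest locality
        second_last, last = positions[-2], positions[-1]
        # Number of distinct other items touched between the last two accesses.
        return len(set(trace[second_last + 1:last]))

    def prioritize_by_lirs_idea(trace):
        # Lower inter-reference recency means stronger locality, hence higher priority.
        return sorted(set(trace),
                      key=lambda item: (inter_reference_recency(trace, item), item))

    # Example VM access trace; vm1 has the tightest re-reference pattern.
    print(prioritize_by_lirs_idea(["vm1", "vm2", "vm1", "vm3", "vm1", "vm2"]))
    # ['vm1', 'vm2', 'vm3']

A full LIRS policy additionally partitions items into LIR and HIR sets and maintains two stacks; the sketch keeps only the ranking criterion to make the claim term concrete.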

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the LIRS replacement scheme comprises using LIRS based processor usage tracking .
Server Virtualization In Autonomic Management Of Heterogeneous Workloads . Server virtualization opens up a range of new possibilities for autonomic datacenter management , through the availability of new automation mechanisms that can be exploited to control and monitor tasks running within virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) . This offers not only new and more flexible control to the operator using a management console , but also more powerful and flexible autonomic control , through management software that maintains the system in a desired state in the face of changing workload and demand . This paper explores in particular the use of server virtualization technology in the autonomic management of data centers running a heterogeneous mix of workloads . We present a system that manages heterogeneous workloads to their performance goals and demonstrate its effectiveness via real-system experiments and simulation . We also present some of the significant challenges to wider usage of virtual servers in autonomic datacenter management .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the LIRS replacement scheme comprises using LIRS based memory usage tracking .
Server Virtualization In Autonomic Management Of Heterogeneous Workloads . Server virtualization opens up a range of new possibilities for autonomic datacenter management , through the availability of new automation mechanisms that can be exploited to control and monitor tasks running within virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) . This offers not only new and more flexible control to the operator using a management console , but also more powerful and flexible autonomic control , through management software that maintains the system in a desired state in the face of changing workload and demand . This paper explores in particular the use of server virtualization technology in the autonomic management of data centers running a heterogeneous mix of workloads . We present a system that manages heterogeneous workloads to their performance goals and demonstrate its effectiveness via real-system experiments and simulation . We also present some of the significant challenges to wider usage of virtual servers in autonomic datacenter management .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
Server Virtualization In Autonomic Management Of Heterogeneous Workloads . Server virtualization opens up a range of new possibilities for autonomic datacenter management , through the availability of new automation mechanisms that can be exploited to control and monitor tasks running within virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) . This offers not only new and more flexible control to the operator using a management console , but also more powerful and flexible autonomic control , through management software that maintains the system in a desired state in the face of changing workload and demand . This paper explores in particular the use of server virtualization technology in the autonomic management of data centers running a heterogeneous mix of workloads . We present a system that manages heterogeneous workloads to their performance goals and demonstrate its effectiveness via real-system experiments and simulation . We also present some of the significant challenges to wider usage of virtual servers in autonomic datacenter management .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
Server Virtualization In Autonomic Management Of Heterogeneous Workloads . Server virtualization opens up a range of new possibilities for autonomic datacenter management , through the availability of new automation mechanisms that can be exploited to control and monitor tasks running within virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) . This offers not only new and more flexible control to the operator using a management console , but also more powerful and flexible autonomic control , through management software that maintains the system in a desired state in the face of changing workload and demand . This paper explores in particular the use of server virtualization technology in the autonomic management of data centers running a heterogeneous mix of workloads . We present a system that manages heterogeneous workloads to their performance goals and demonstrate its effectiveness via real-system experiments and simulation . We also present some of the significant challenges to wider usage of virtual servers in autonomic datacenter management .
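
Claim 6 above recites a least recently used (LRU) replacement scheme for prioritizing VMs. A minimal, hypothetical sketch of LRU ordering over VM identifiers (not the patent's code) is shown below.

    from collections import OrderedDict

    class LRUTracker:
        def __init__(self):
            self._order = OrderedDict()   # oldest access first, newest last

        def touch(self, vm_id):
            # Record an access: move (or insert) the VM at the most-recent end.
            self._order.pop(vm_id, None)
            self._order[vm_id] = True

        def eviction_order(self):
            # Least recently used VMs come first, i.e. first in line for migration.
            return list(self._order.keys())

    tracker = LRUTracker()
    for vm in ["vm1", "vm2", "vm3", "vm1"]:
        tracker.touch(vm)
    print(tracker.eviction_order())   # ['vm2', 'vm3', 'vm1']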

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate (virtual server) of cloud resources (virtual machines) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment (virtual machines) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Server Virtualization In Autonomic Management Of Heterogeneous Workloads . Server virtualization opens up a range of new possibilities for autonomic datacenter management , through the availability of new automation mechanisms that can be exploited to control and monitor tasks running within virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) . This offers not only new and more flexible control to the operator using a management console , but also more powerful and flexible autonomic control , through management software that maintains the system in a desired state in the face of changing workload and demand . This paper explores in particular the use of server virtualization technology in the autonomic management of data centers running a heterogeneous mix of workloads . We present a system that manages heterogeneous workloads to their performance goals and demonstrate its effectiveness via real-system experiments and simulation . We also present some of the significant challenges to wider usage of virtual servers (consumption rate, memory consumption rate) in autonomic datacenter management .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using a low inter-reference recency set (LIRS) replacement scheme .
Server Virtualization In Autonomic Management Of Heterogeneous Workloads . Server virtualization opens up a range of new possibilities for autonomic datacenter management , through the availability of new automation mechanisms that can be exploited to control and monitor tasks running within virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) . This offers not only new and more flexible control to the operator using a management console , but also more powerful and flexible autonomic control , through management software that maintains the system in a desired state in the face of changing workload and demand . This paper explores in particular the use of server virtualization technology in the autonomic management of data centers running a heterogeneous mix of workloads . We present a system that manages heterogeneous workloads to their performance goals and demonstrate its effectiveness via real-system experiments and simulation . We also present some of the significant challenges to wider usage of virtual servers in autonomic datacenter management .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
Server Virtualization In Autonomic Management Of Heterogeneous Workloads . Server virtualization opens up a range of new possibilities for autonomic datacenter management , through the availability of new automation mechanisms that can be exploited to control and monitor tasks running within virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) . This offers not only new and more flexible control to the operator using a management console , but also more powerful and flexible autonomic control , through management software that maintains the system in a desired state in the face of changing workload and demand . This paper explores in particular the use of server virtualization technology in the autonomic management of data centers running a heterogeneous mix of workloads . We present a system that manages heterogeneous workloads to their performance goals and demonstrate its effectiveness via real-system experiments and simulation . We also present some of the significant challenges to wider usage of virtual servers in autonomic datacenter management .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using least recently used (LRU) replacement scheme .
Server Virtualization In Autonomic Management Of Heterogeneous Workloads . Server virtualization opens up a range of new possibilities for autonomic datacenter management , through the availability of new automation mechanisms that can be exploited to control and monitor tasks running within virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) . This offers not only new and more flexible control to the operator using a management console , but also more powerful and flexible autonomic control , through management software that maintains the system in a desired state in the face of changing workload and demand . This paper explores in particular the use of server virtualization technology in the autonomic management of data centers running a heterogeneous mix of workloads . We present a system that manages heterogeneous workloads to their performance goals and demonstrate its effectiveness via real-system experiments and simulation . We also present some of the significant challenges to wider usage of virtual servers in autonomic datacenter management .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment (virtual machines) , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate (virtual server) of cloud resources (virtual machines) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Server Virtualization In Autonomic Management Of Heterogeneous Workloads . Server virtualization opens up a range of new possibilities for autonomic datacenter management , through the availability of new automation mechanisms that can be exploited to control and monitor tasks running within virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) . This offers not only new and more flexible control to the operator using a management console , but also more powerful and flexible autonomic control , through management software that maintains the system in a desired state in the face of changing workload and demand . This paper explores in particular the use of server virtualization technology in the autonomic management of data centers running a heterogeneous mix of workloads . We present a system that manages heterogeneous workloads to their performance goals and demonstrate its effectiveness via real-system experiments and simulation . We also present some of the significant challenges to wider usage of virtual servers (consumption rate, memory consumption rate) in autonomic datacenter management .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using a low inter-reference recency set (LIRS) replacement scheme .
Server Virtualization In Autonomic Management Of Heterogeneous Workloads . Server virtualization opens up a range of new possibilities for autonomic datacenter management , through the availability of new automation mechanisms that can be exploited to control and monitor tasks running within virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) . This offers not only new and more flexible control to the operator using a management console , but also more powerful and flexible autonomic control , through management software that maintains the system in a desired state in the face of changing workload and demand . This paper explores in particular the use of server virtualization technology in the autonomic management of data centers running a heterogeneous mix of workloads . We present a system that manages heterogeneous workloads to their performance goals and demonstrate its effectiveness via real-system experiments and simulation . We also present some of the significant challenges to wider usage of virtual servers in autonomic datacenter management .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
Server Virtualization In Autonomic Management Of Heterogeneous Workloads . Server virtualization opens up a range of new possibilities for autonomic datacenter management , through the availability of new automation mechanisms that can be exploited to control and monitor tasks running within virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) . This offers not only new and more flexible control to the operator using a management console , but also more powerful and flexible autonomic control , through management software that maintains the system in a desired state in the face of changing workload and demand . This paper explores in particular the use of server virtualization technology in the autonomic management of data centers running a heterogeneous mix of workloads . We present a system that manages heterogeneous workloads to their performance goals and demonstrate its effectiveness via real-system experiments and simulation . We also present some of the significant challenges to wider usage of virtual servers in autonomic datacenter management .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using least recently used (LRU) replacement scheme .
Server Virtualization In Autonomic Management Of Heterogeneous Workloads . Server virtualization opens up a range of new possibilities for autonomic datacenter management , through the availability of new automation mechanisms that can be exploited to control and monitor tasks running within virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) . This offers not only new and more flexible control to the operator using a management console , but also more powerful and flexible autonomic control , through management software that maintains the system in a desired state in the face of changing workload and demand . This paper explores in particular the use of server virtualization technology in the autonomic management of data centers running a heterogeneous mix of workloads . We present a system that manages heterogeneous workloads to their performance goals and demonstrate its effectiveness via real-system experiments and simulation . We also present some of the significant challenges to wider usage of virtual servers in autonomic datacenter management .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
COMPUTER COMMUNICATIONS. 29 (9): 1271-1283 Sp. Iss. SI MAY 31 2006

Publication Year: 2006

Design And Development Of Ethernet-based Storage Area Network Protocol

Agency for Science, Technology and Research, Singapore (A*STAR)

Wang, Yeo, Zhu, Chong, Chai, Zhou, Bitwas
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (storage area) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Design And Development Of Ethernet-based Storage Area Network Protocol . In response to the trend of rapidly growing volume of and increasingly critical role played by data storage , there is a strong demand to share storage over the network . In this paper , we present an Ethernet-based storage area (memory usage) network protocol , called HyperSCSI . It is designed to transport the Small Computer System Interface (SCSI) protocol data across Ethernet and provide relatively high performance with simplicity and low cost . It is demonstrated through comprehensive experiment results that HyperSCSI is capable of using the existing Ethernet-based network infrastructure , common off-the-shelf hardware , and well-established storage technologies , and turning them into a high performance and reliable network storage solution . Although the HyperSCSI protocol is developed with a focus on local area networks , a field trial has demonstrated that it can leverage high-speed optical network with Generalized Multi-Protocol Label Switching (GMPLS) control to support wide area storage applications . Such protocol design work also provides a simple and efficient environment for investigating network storage characteristics and building new types of SAN over generic network infrastructure . (C) 2005 Elsevier B.V. All rights reserved .
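
The HyperSCSI abstract above concerns transporting SCSI protocol data directly over Ethernet. Purely to make that idea concrete, the sketch below packs a command block into a raw Ethernet frame with an invented encapsulation header and placeholder EtherType; none of the field layout reflects the actual HyperSCSI frame format.

    import struct

    EXAMPLE_ETHERTYPE = 0x88FF   # placeholder value, not HyperSCSI's assigned EtherType

    def build_frame(dst_mac: bytes, src_mac: bytes, seq: int, scsi_cdb: bytes) -> bytes:
        # Ethernet header: destination MAC, source MAC, EtherType.
        eth_header = dst_mac + src_mac + struct.pack("!H", EXAMPLE_ETHERTYPE)
        # Toy encapsulation header: sequence number and payload length.
        proto_header = struct.pack("!IH", seq, len(scsi_cdb))
        return eth_header + proto_header + scsi_cdb

    def parse_frame(frame: bytes):
        seq, length = struct.unpack("!IH", frame[14:20])
        return seq, frame[20:20 + length]

    # 0x28 is the SCSI READ(10) opcode; the rest of the 10-byte CDB is zeroed here.
    frame = build_frame(b"\xff" * 6, b"\x02" * 6, 1, bytes([0x28]) + bytes(9))
    print(parse_frame(frame))   # prints the sequence number and the recovered CDB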

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (storage area) tracking .
Design And Development Of Ethernet-based Storage Area Network Protocol . In response to the trend of rapidly growing volume of and increasingly critical role played by data storage , there is a strong demand to share storage over the network . In this paper , we present an Ethernet-based storage area (memory usage) network protocol , called HyperSCSI . It is designed to transport the Small Computer System Interface (SCSI) protocol data across Ethernet and provide relatively high performance with simplicity and low cost . It is demonstrated through comprehensive experiment results that HyperSCSI is capable of using the existing Ethernet-based network infrastructure , common off-the-shelf hardware , and well-established storage technologies , and turning them into a high performance and reliable network storage solution . Although the HyperSCSI protocol is developed with a focus on local area networks , a field trial has demonstrated that it can leverage high-speed optical network with Generalized Multi-Protocol Label Switching (GMPLS) control to support wide area storage applications . Such protocol design work also provides a simple and efficient environment for investigating network storage characteristics and building new types of SAN over generic network infrastructure . (C) 2005 Elsevier B.V. All rights reserved .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (storage area) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Design And Development Of Ethernet-based Storage Area Network Protocol . In response to the trend of rapidly growing volume of and increasingly critical role played by data storage , there is a strong demand to share storage over the network . In this paper , we present an Ethernet-based storage area (memory usage) network protocol , called HyperSCSI . It is designed to transport the Small Computer System Interface (SCSI) protocol data across Ethernet and provide relatively high performance with simplicity and low cost . It is demonstrated through comprehensive experiment results that HyperSCSI is capable of using the existing Ethernet-based network infrastructure , common off-the-shelf hardware , and well-established storage technologies , and turning them into a high performance and reliable network storage solution . Although the HyperSCSI protocol is developed with a focus on local area networks , a field trial has demonstrated that it can leverage high-speed optical network with Generalized Multi-Protocol Label Switching (GMPLS) control to support wide area storage applications . Such protocol design work also provides a simple and efficient environment for investigating network storage characteristics and building new types of SAN over generic network infrastructure . (C) 2005 Elsevier B.V. All rights reserved .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (storage area) tracking .
Design And Development Of Ethernet-based Storage Area Network Protocol . In response to the trend of rapidly growing volume of and increasingly critical role played by data storage , there is a strong demand to share storage over the network . In this paper , we present an Ethernet-based storage area (memory usage) network protocol , called HyperSCSI . It is designed to transport the Small Computer System Interface (SCSI) protocol data across Ethernet and provide relatively high performance with simplicity and low cost . It is demonstrated through comprehensive experimental results that HyperSCSI is capable of using the existing Ethernet-based network infrastructure , common off-the-shelf hardware , and well-established storage technologies , and turning them into a high performance and reliable network storage solution . Although the HyperSCSI protocol is developed with a focus on local area networks , a field trial has demonstrated that it can leverage high-speed optical networks with Generalized Multi-Protocol Label Switching (GMPLS) control to support wide area storage applications . Such protocol design work also provides a simple and efficient environment for investigating network storage characteristics and building new types of SAN over generic network infrastructure . (C) 2005 Elsevier B.V. All rights reserved .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate (network protocol) , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (storage area) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Design And Development Of Ethernet-based Storage Area Network Protocol . In response to the trend of rapidly growing volume of and increasingly critical role played by data storage , there is a strong demand to share storage over the network . In this paper , we present an Ethernet-based storage area (memory usage) network protocol (memory consumption rate) , called HyperSCSI . It is designed to transport the Small Computer System Interface (SCSI) protocol data across Ethernet and provide relatively high performance with simplicity and low cost . It is demonstrated through comprehensive experimental results that HyperSCSI is capable of using the existing Ethernet-based network infrastructure , common off-the-shelf hardware , and well-established storage technologies , and turning them into a high performance and reliable network storage solution . Although the HyperSCSI protocol is developed with a focus on local area networks , a field trial has demonstrated that it can leverage high-speed optical networks with Generalized Multi-Protocol Label Switching (GMPLS) control to support wide area storage applications . Such protocol design work also provides a simple and efficient environment for investigating network storage characteristics and building new types of SAN over generic network infrastructure . (C) 2005 Elsevier B.V. All rights reserved .
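Claim 13 (like claims 1 and 7) also recites a "change region size" determined from changed regions of a graphical display generated by the VMs. As an illustrative aid only, one way to obtain such a metric is to difference two frame buffers; the sketch below uses a simple per-pixel comparison and is not taken from the patent's specification.

```python
# Illustrative computation of a "change region size": the fraction of display
# pixels that changed between two frames, plus the bounding box of the change.
# This is one possible reading of the claim term, not the patent's definition.
import numpy as np


def change_region_size(prev_frame: np.ndarray, curr_frame: np.ndarray) -> float:
    """Return the changed-pixel fraction between two HxWx3 frame buffers."""
    changed = np.any(prev_frame != curr_frame, axis=-1)  # HxW boolean mask
    return float(changed.mean())


def change_bounding_box(prev_frame: np.ndarray, curr_frame: np.ndarray):
    """Return (top, left, bottom, right) of the changed region, or None."""
    changed = np.any(prev_frame != curr_frame, axis=-1)
    ys, xs = np.nonzero(changed)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())


# Example with two tiny synthetic 4x4 RGB frames.
a = np.zeros((4, 4, 3), dtype=np.uint8)
b = a.copy()
b[1, 2] = (255, 0, 0)  # one pixel changes
print(change_region_size(a, b))   # 1/16 = 0.0625
print(change_bounding_box(a, b))  # (1, 2, 1, 2)
```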

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (storage area) tracking .
Design And Development Of Ethernet-based Storage Area Network Protocol . In response to the trend of rapidly growing volume of and increasingly critical role played by data storage , there is a strong demand to share storage over the network . In this paper , we present an Ethernet-based storage area (memory usage) network protocol , called HyperSCSI . It is designed to transport the Small Computer System Interface (SCSI) protocol data across Ethernet and provide relatively high performance with simplicity and low cost . It is demonstrated through comprehensive experimental results that HyperSCSI is capable of using the existing Ethernet-based network infrastructure , common off-the-shelf hardware , and well-established storage technologies , and turning them into a high performance and reliable network storage solution . Although the HyperSCSI protocol is developed with a focus on local area networks , a field trial has demonstrated that it can leverage high-speed optical networks with Generalized Multi-Protocol Label Switching (GMPLS) control to support wide area storage applications . Such protocol design work also provides a simple and efficient environment for investigating network storage characteristics and building new types of SAN over generic network infrastructure . (C) 2005 Elsevier B.V. All rights reserved .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
MIDDLEWARE 2006, PROCEEDINGS. 4290: 342-362 2006

Publication Year: 2006

Enforcing Performance Isolation Across Virtual Machines In Xen

University of California, San Diego

Gupta, Cherkasova, Gardner, Vahdat, Vansteen, Henning
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment (virtual machines) , comprising : determining a consumption rate of cloud resources (virtual machines) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource (Virtual machines) management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (second resource) (VMs) have recently emerged as the basis for allocating resources in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .
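The Gupta et al. abstract describes measuring per-VM resource consumption (XenMon), including work done in driver domains, and capping aggregate use against administrator-specified limits (ShareGuard). The sketch below is a generic per-VM accounting and limiting loop written only to make that description concrete for the chart; it is not the authors' code, and the sampling interface is an assumption.

```python
# Generic per-VM accounting/limiting sketch in the spirit described by the
# abstract: measure per-VM CPU consumption over an interval, compare against
# an administrator limit, and flag offenders. The sample source is a plain
# dict here; in a real hypervisor it would come from scheduler and
# driver-domain instrumentation.
from typing import Dict, List


def enforce_limits(cpu_seconds_used: Dict[str, float],
                   interval_seconds: float,
                   limits: Dict[str, float]) -> List[str]:
    """Return VMs whose CPU share over the interval exceeded their limit.

    cpu_seconds_used: CPU time consumed by each VM during the interval,
                      including work attributed to it in driver domains.
    limits:           per-VM share limits as fractions of one CPU (0..1).
    """
    offenders = []
    for vm, used in cpu_seconds_used.items():
        share = used / interval_seconds
        if share > limits.get(vm, 1.0):
            offenders.append(vm)
    return offenders


# Example: over a 10 s interval, vm2 used 6 s of CPU against a 0.5 limit.
print(enforce_limits({"vm1": 2.0, "vm2": 6.0}, 10.0, {"vm1": 0.5, "vm2": 0.5}))
# -> ['vm2']
```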

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (VMs) have recently emerged as the basis for allocating resources in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .
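Claims 2-4 (and the corresponding medium and system claims) recite a low inter-reference recency set (LIRS) replacement scheme. The full LIRS algorithm maintains an LIR/HIR partition with a stack and a queue; the sketch below only tracks and ranks by inter-reference recency (the number of other distinct items touched between an item's last two references), which is the core quantity LIRS uses. It is a simplification for illustration, not a faithful LIRS implementation.

```python
# Simplified illustration of the quantity LIRS ranks on: inter-reference
# recency (IRR). Items with low IRR are treated as "hot". This is a teaching
# sketch, not the full LIRS stack/queue algorithm.
from typing import Dict, List, Optional


class IrrTracker:
    def __init__(self) -> None:
        self._clock = 0
        self._last_access: Dict[str, int] = {}
        self._irr: Dict[str, Optional[int]] = {}

    def access(self, item: str) -> None:
        self._clock += 1
        if item in self._last_access:
            # Count distinct items accessed strictly between the two accesses.
            prev = self._last_access[item]
            between = {i for i, t in self._last_access.items()
                       if prev < t < self._clock and i != item}
            self._irr[item] = len(between)
        else:
            self._irr[item] = None  # only one reference so far (infinite IRR)
        self._last_access[item] = self._clock

    def hottest_first(self) -> List[str]:
        # Low IRR first; items seen only once go last.
        return sorted(self._irr,
                      key=lambda i: (self._irr[i] is None, self._irr[i] or 0))


t = IrrTracker()
for x in ["a", "b", "a", "c", "b", "c", "c"]:
    t.access(x)
print(t.hottest_first())  # ['c', 'a', 'b'] for this access pattern
```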

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the LIRS replacement scheme comprises using LIRS based processor usage tracking .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (VMs) have recently emerged as the basis for allocating resources in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the LIRS replacement scheme comprises using LIRS based memory usage tracking .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (VMs) have recently emerged as the basis for allocating resources in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (VMs) have recently emerged as the basis for allocating resources in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the second resource (Virtual machines) management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (second resource) (VMs) have recently emerged as the basis for allocating resources in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .
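Claims 6, 12, and 18 fall back to a least recently used (LRU) replacement scheme for prioritization. For comparison with the IRR sketch above, a minimal LRU ordering built on collections.OrderedDict is shown below; attaching it to a VM eviction or migration decision, as done here, is an assumption for illustration.

```python
# Minimal LRU ordering: the least recently used VM (or cache block) surfaces
# first as the eviction/migration candidate. Standard-library only.
from collections import OrderedDict
from typing import List


class LruOrder:
    def __init__(self) -> None:
        self._order: "OrderedDict[str, None]" = OrderedDict()

    def access(self, item: str) -> None:
        # Move (or insert) the item to the most-recently-used end.
        self._order.pop(item, None)
        self._order[item] = None

    def least_recent_first(self) -> List[str]:
        return list(self._order)  # oldest access first


lru = LruOrder()
for x in ["vm1", "vm2", "vm3", "vm1"]:
    lru.access(x)
print(lru.least_recent_first())  # ['vm2', 'vm3', 'vm1']
```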

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager (allocating resources) to : determine a consumption rate of cloud resources (virtual machines) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment (virtual machines) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource (Virtual machines) management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (second resource) (VMs) have recently emerged as the basis for allocating resources (cloud computing resource manager) in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager (allocating resources) to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using a low inter-reference recency set (LIRS) replacement scheme .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (VMs) have recently emerged as the basis for allocating resources (cloud computing resource manager) in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager (allocating resources) to use LIRS based processor usage tracking .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (VMs) have recently emerged as the basis for allocating resources (cloud computing resource manager) in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager (allocating resources) to use LIRS based memory usage tracking .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (VMs) have recently emerged as the basis for allocating resources (cloud computing resource manager) in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (VMs) have recently emerged as the basis for allocating resources in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager (allocating resources) to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using least recently used (LRU) replacement scheme .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (VMs) have recently emerged as the basis for allocating resources (cloud computing resource manager) in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment (virtual machines) , comprising : a processor ;

a cloud computing resource manager (allocating resources) communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (virtual machines) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource (Virtual machines) management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (second resource) (VMs) have recently emerged as the basis for allocating resources (cloud computing resource manager) in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager (allocating resources) to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using a low inter-reference recency set (LIRS) replacement scheme .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (VMs) have recently emerged as the basis for allocating resources (cloud computing resource manager) in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager (allocating resources) to use LIRS based processor usage tracking .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (VMs) have recently emerged as the basis for allocating resources (cloud computing resource manager) in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager (allocating resources) to use LIRS based memory usage tracking .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (VMs) have recently emerged as the basis for allocating resources (cloud computing resource manager) in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (VMs) have recently emerged as the basis for allocating resources in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager (allocating resources) to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using least recently used (LRU) replacement scheme .
Enforcing Performance Isolation Across Virtual Machines In Xen . Virtual machines (VMs) have recently emerged as the basis for allocating resources (cloud computing resource manager) in enterprise settings and hosting centers . One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics . However , such multiplexing must often be done while observing per-VM performance guarantees or service level agreements . Thus , one important requirement in this environment is effective performance isolation among VMs . In this paper , we address performance isolation across virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) in Xen [1] . For instance , while Xen can allocate fixed shares of CPU among competing VMs , it does not currently account for work done on behalf of individual VMs in device drivers . Thus , the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place . In this paper , we present the design and evaluation of a set of primitives implemented in Xen to address this issue . First , XenMon accurately measures per-VM resource consumption , including work done on behalf of a particular VM in Xen's driver domains . Next , our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU . Finally , ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits . Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
2006 IEEE/IFIP NETWORK OPERATIONS AND MANAGEMENT SYMPOSIUM, VOLS 1 AND 2. : 373-381 2006

Publication Year: 2006

Application Performance Management In Virtualized Server Environments

Purdue University

Khanna, Beaty, Kar, Kochut
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme (key performance) based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources (physical server) located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Application Performance Management In Virtualized Server Environments . As businesses have grown , so has the need to deploy I/T applications rapidly to support the expanding business processes . Often , this growth was achieved in an unplanned way : each time a new application was needed , a new server along with the application software was deployed and new storage elements were purchased . In many cases this has led to what is often referred to as "server sprawl" , resulting in low server utilization and high system management costs . An architectural approach that is becoming increasingly popular to address this problem is known as server virtualization . In this paper we introduce the concept of server consolidation using virtualization and point out associated issues that arise in the area of application performance . We show how some of these problems can be solved by monitoring key performance (second resource management scheme) metrics and using the data to trigger migration of Virtual Machines within physical servers (alternate cloud resources, community cloud, alternate cloud resources include one) . The algorithms we present attempt to minimize the cost of migration and maintain acceptable application performance levels .
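The Khanna et al. abstract describes monitoring key performance metrics and using the data to trigger migration of virtual machines between physical servers while minimizing migration cost. The sketch below is a generic "monitor, trigger, place" loop written only to make that description concrete; the utilization metric, threshold, and greedy cost proxy are assumptions and do not reproduce the paper's algorithm.

```python
# Generic "monitor, trigger, place" sketch: when a host's utilization exceeds
# a threshold, greedily move its smallest VM to the least-loaded host that can
# absorb it (a crude proxy for minimizing migration cost). Illustrative only.
from typing import Dict, List, Tuple

UTIL_THRESHOLD = 0.80  # assumed trigger point per physical server


def plan_migrations(hosts: Dict[str, Dict[str, float]]) -> List[Tuple[str, str, str]]:
    """hosts maps host -> {vm: utilization share}; returns (vm, src, dst) moves."""
    moves = []
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    for src, vms in hosts.items():
        while load[src] > UTIL_THRESHOLD and vms:
            vm = min(vms, key=vms.get)               # cheapest VM to move
            candidates = [h for h in hosts if h != src
                          and load[h] + vms[vm] <= UTIL_THRESHOLD]
            if not candidates:
                break
            dst = min(candidates, key=load.get)      # least-loaded target
            load[src] -= vms[vm]
            load[dst] += vms[vm]
            moves.append((vm, src, dst))
            vms = {k: v for k, v in vms.items() if k != vm}
        hosts[src] = vms  # keep the source host consistent with remaining VMs
    return moves


print(plan_migrations({
    "hostA": {"vm1": 0.5, "vm2": 0.4},   # 0.9 > threshold
    "hostB": {"vm3": 0.2},
}))
# -> [('vm2', 'hostA', 'hostB')]
```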

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (physical server) include one or more of resources included in public cloud , resources included in community cloud (physical server) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
Application Performance Management In Virtualized Server Environments . As businesses have grown , so has the need to deploy I/T applications rapidly to support the expanding business processes . Often , this growth was achieved in an unplanned way : each time a new application was needed , a new server along with the application software was deployed and new storage elements were purchased . In many cases this has led to what is often referred to as "server sprawl" , resulting in low server utilization and high system management costs . An architectural approach that is becoming increasingly popular to address this problem is known as server virtualization . In this paper we introduce the concept of server consolidation using virtualization and point out associated issues that arise in the area of application performance . We show how some of these problems can be solved by monitoring key performance metrics and using the data to trigger migration of Virtual Machines within physical servers (alternate cloud resources, community cloud, alternate cloud resources include one) . The algorithms we present attempt to minimize the cost of migration and maintain acceptable application performance levels .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the second resource management scheme (key performance) comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
Application Performance Management In Virtualized Server Environments . As businesses have grown , so has the need to deploy I/T applications rapidly to support the expanding business processes . Often , this growth was achieved in an unplanned way : each time a new application was needed , a new server along with the application software was deployed and new storage elements were purchased . In many cases this has led to what is often referred to as "server sprawl" , resulting in low server utilization and high system management costs . An architectural approach that is becoming increasingly popular to address this problem is known as server virtualization . In this paper we introduce the concept of server consolidation using virtualization and point out associated issues that arise in the area of application performance . We show how some of these problems can be solved by monitoring key performance (second resource management scheme) metrics and using the data to trigger migration of Virtual Machines within physical servers . The algorithms we present attempt to minimize the cost of migration and maintain acceptable application performance levels .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme (key performance) based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources (physical server) located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Application Performance Management In Virtualized Server Environments . As businesses have grown , so has the need to deploy I/T applications rapidly to support the expanding business processes . Often , this growth was achieved in an unplanned way : each time a new application was needed , a new server along with the application software was deployed and new storage elements were purchased . In many cases this has led to what is often referred to as "server sprawl" , resulting in low server utilization and high system management costs . An architectural approach that is becoming increasingly popular to address this problem is known as server virtualization . In this paper we introduce the concept of server consolidation using virtualization and point out associated issues that arise in the area of application performance . We show how some of these problems can be solved by monitoring key performance (second resource management scheme) metrics and using the data to trigger migration of Virtual Machines within physical servers (alternate cloud resources, community cloud, alternate cloud resources include one) . The algorithms we present attempt to minimize the cost of migration and maintain acceptable application performance levels .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (physical server) include one or more of resources included in public cloud , resources included in community cloud (physical server) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
Application Performance Management In Virtualized Server Environments . As businesses have grown , so has the need to deploy IT applications rapidly to support the expanding business processes . Often , this growth was achieved in an unplanned way : each time a new application was needed a new server along with the application software was deployed and new storage elements were purchased . In many cases this has led to what is often referred to as "server sprawl" , resulting in low server utilization and high system management costs . An architectural approach that is becoming increasingly popular to address this problem is known as server virtualization . In this paper we introduce the concept of server consolidation using virtualization and point out associated issues that arise in the area of application performance . We show how some of these problems can be solved by monitoring key performance metrics and using the data to trigger migration of Virtual Machines within physical server (alternate cloud resources, community cloud, alternate cloud resources include one) s . The algorithms we present attempt to minimize the cost of migration and maintain acceptable application performance levels .
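
Claim 11 enumerates the categories of alternate cloud resources (public, community, private, hybrid, Internet, VPN). The following sketch simply shows how a resource manager might tag candidate migration targets by such categories and pick among them; the CloudCategory values track the claim wording, while pick_target, its tuple layout, and the preference order are hypothetical.

```python
# Illustrative categorization of alternate cloud resources; only the category
# names come from the claim language, everything else is made up.
from enum import Enum, auto

class CloudCategory(Enum):
    PUBLIC = auto()
    COMMUNITY = auto()
    PRIVATE = auto()
    HYBRID = auto()
    INTERNET = auto()
    VPN = auto()

def pick_target(targets, preferred=(CloudCategory.PRIVATE, CloudCategory.COMMUNITY)):
    """targets: iterable of (name, CloudCategory, free_capacity) tuples."""
    outside = [t for t in targets if t[1] in preferred and t[2] > 0]
    # Fall back to any category with spare capacity if the preferred ones are full.
    candidates = outside or [t for t in targets if t[2] > 0]
    return max(candidates, key=lambda t: t[2], default=None)
```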

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme (key performance) based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources (physical server) located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Application Performance Management In Virtualized Server Environments . As businesses have grown , so has the need to deploy IT applications rapidly to support the expanding business processes . Often , this growth was achieved in an unplanned way : each time a new application was needed a new server along with the application software was deployed and new storage elements were purchased . In many cases this has led to what is often referred to as "server sprawl" , resulting in low server utilization and high system management costs . An architectural approach that is becoming increasingly popular to address this problem is known as server virtualization . In this paper we introduce the concept of server consolidation using virtualization and point out associated issues that arise in the area of application performance . We show how some of these problems can be solved by monitoring key performance (second resource management scheme) metrics and using the data to trigger migration of Virtual Machines within physical server (alternate cloud resources, community cloud, alternate cloud resources include one) s . The algorithms we present attempt to minimize the cost of migration and maintain acceptable application performance levels .
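
Claim 13 adds a "change region size" metric determined from changed regions of a VM's graphical display. Below is a minimal sketch of that signal, assuming the display is exposed as a 2-D array of pixel values and comparing two snapshots tile by tile; the tile size, frame representation, and function name are assumptions for illustration only.

```python
# A loose sketch of the "change region size" signal: compare two framebuffer
# snapshots tile by tile and count how many tiles changed.
def change_region_size(prev_frame, curr_frame, tile=16):
    height, width = len(curr_frame), len(curr_frame[0])
    changed_tiles = 0
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            block_changed = any(
                prev_frame[yy][xx] != curr_frame[yy][xx]
                for yy in range(y, min(y + tile, height))
                for xx in range(x, min(x + tile, width))
            )
            if block_changed:
                changed_tiles += 1
    return changed_tiles  # larger values suggest a more active VM display
```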

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (physical server) include one or more of resources included in public cloud , resources included in community cloud (physical server) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
Application Performance Management In Virtualized Server Environments . As businesses have grown , so has the need to deploy IT applications rapidly to support the expanding business processes . Often , this growth was achieved in an unplanned way : each time a new application was needed a new server along with the application software was deployed and new storage elements were purchased . In many cases this has led to what is often referred to as "server sprawl" , resulting in low server utilization and high system management costs . An architectural approach that is becoming increasingly popular to address this problem is known as server virtualization . In this paper we introduce the concept of server consolidation using virtualization and point out associated issues that arise in the area of application performance . We show how some of these problems can be solved by monitoring key performance metrics and using the data to trigger migration of Virtual Machines within physical server (alternate cloud resources, community cloud, alternate cloud resources include one) s . The algorithms we present attempt to minimize the cost of migration and maintain acceptable application performance levels .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
Twenty-Second IEEE/Thirteenth NASA Goddard Conference On Mass Storage Systems And Technologies, Proceedings. : 118-127 2005

Publication Year: 2005

Storage-based Intrusion Detection For Storage Area Networks (SANs)

IBM Research

Banikazemi, Poff, Abali, Ieee Computer Society
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing (time copy, N storage) environment , comprising : determining a consumption rate of cloud resources (storage system) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage system (cloud resources) s see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) operation provided by SAN storage (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .
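
For the "determining a consumption rate ... based on monitoring" element of claim 1, here is a small hedged sketch of how sampled CPU, memory, and I/O figures for a VM could be rolled into a single consumption rate and a rate-of-change. The sliding window, the choice of the dominant metric, and the RateTracker name are assumptions, not anything taught by the patent or the SAN reference.

```python
# Hedged sketch: turn raw per-VM samples into a consumption rate and a delta.
# How the samples are collected is outside this sketch.
from collections import deque

class RateTracker:
    def __init__(self, window=5):
        self.samples = deque(maxlen=window)

    def add(self, cpu, memory, io_rate):
        self.samples.append((cpu, memory, io_rate))

    def rate(self):
        # Average each metric over the window, then take the dominant one.
        if not self.samples:
            return 0.0
        n = len(self.samples)
        averages = [sum(column) / n for column in zip(*self.samples)]
        return max(averages)

    def delta(self):
        # Change in consumption between the two most recent samples.
        if len(self.samples) < 2:
            return 0.0
        newest, previous = self.samples[-1], self.samples[-2]
        return max(abs(a - b) for a, b in zip(newest, previous))
```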

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (storage system) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage system (cloud resources) s see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy operation provided by SAN storage devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .
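
Claim 2 names a low inter-reference recency set (LIRS) replacement scheme. A full LIRS implementation maintains LIR/HIR sets and a recency stack; the sketch below only illustrates the inter-reference-recency signal that scheme is built on, ranking VMs by the gap between their last two resource accesses. It is a rough analogue under that simplifying assumption, not LIRS proper.

```python
# Rough analogue of LIRS-style prioritization: smaller inter-reference recency
# (IRR) is treated as "hotter"; VMs never re-referenced sort last.
def prioritize_by_irr(access_log):
    """access_log: list of (timestamp, vm_id) resource-access events."""
    last_seen, irr = {}, {}
    for ts, vm in access_log:
        if vm in last_seen:
            irr[vm] = ts - last_seen[vm]   # inter-reference recency
        last_seen[vm] = ts
    ranked = sorted(last_seen, key=lambda vm: irr.get(vm, float("inf")))
    return ranked  # hottest (smallest IRR) first
```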

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (storage system) using the LIRS replacement scheme comprises using LIRS based processor usage tracking .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage system (cloud resources) s see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy operation provided by SAN storage devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (storage system) using the LIRS replacement scheme comprises using LIRS based memory usage tracking .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage system (cloud resources) s see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy operation provided by SAN storage devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (storage system) include one or more of resources included in public cloud (time copy, N storage) , resources included in community cloud (time copy, N storage) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage system (cloud resources) s see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) operation provided by SAN storage (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (storage system) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage system (cloud resources) s see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy operation provided by SAN storage devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .
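
Claim 6 recites a least recently used (LRU) replacement scheme for prioritizing VMs under the second resource management scheme. A minimal LRU bookkeeping sketch follows, assuming VM "touches" stand in for cache references; the class name and interface are illustrative only.

```python
# Minimal LRU-style prioritization sketch: the VM touched least recently is
# the first candidate to have its consumption shifted elsewhere.
from collections import OrderedDict

class LruPriority:
    def __init__(self):
        self._order = OrderedDict()

    def touch(self, vm_id):
        # Record an access; most recently used VMs move to the end.
        self._order.pop(vm_id, None)
        self._order[vm_id] = True

    def least_recently_used(self):
        # OrderedDict preserves insertion order, so the front holds the
        # least recently touched VM.
        return next(iter(self._order), None)
```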

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing (time copy, N storage) resource manager to : determine a consumption rate of cloud resources (storage system) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage system (cloud resources) s see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) operation provided by SAN storage (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .
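
The closing "migrate ... to alternate cloud resources located outside of the cloud computing environment" step can be pictured as a loop over candidate targets. In the sketch below, Target, MigrationError, and the live_migrate callable are placeholders; no real hypervisor or cloud API is implied.

```python
# Hedged sketch of the migration step; every interface here is a placeholder.
class MigrationError(RuntimeError):
    pass

class Target:
    """Stand-in for an alternate cloud resource outside the environment."""
    def __init__(self, name, capacity):
        self.name, self.capacity = name, capacity

    def has_capacity(self, vm_id):
        return self.capacity > 0

def migrate_out(vm_id, targets, live_migrate):
    """live_migrate(vm_id, target) stands in for the real driver call."""
    for target in targets:
        if not target.has_capacity(vm_id):
            continue
        try:
            live_migrate(vm_id, target)
            return target
        except MigrationError:
            continue   # try the next alternate resource
    return None  # no alternate cloud resource could absorb the workload
```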

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (time copy, N storage) resource manager to prioritize the one or more VMs for consumption of the cloud resources (storage system) using a low inter-reference recency set (LIRS) replacement scheme .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage system (cloud resources) s see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) operation provided by SAN storage (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (time copy, N storage) resource manager to use LIRS based processor usage tracking .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage systems see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) operation provided by SAN storage devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (time copy, N storage) resource manager to use LIRS based memory usage tracking .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage systems see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) operation provided by SAN storage devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (storage system) include one or more of resources included in public cloud (time copy, N storage) , resources included in community cloud (time copy, N storage) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage system (cloud resources) s see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) operation provided by SAN storage (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (time copy, N storage) resource manager to prioritize the one or more VMs for consumption of the cloud resources (storage system) using least recently used (LRU) replacement scheme .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage system (cloud resources) s see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) operation provided by SAN storage (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing (time copy, N storage) environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (storage system) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage system (cloud resources) s see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) operation provided by SAN storage (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (time copy, N storage) resource manager to prioritize the one or more VMs for consumption of the cloud resources (storage system) using a low inter-reference recency set (LIRS) replacement scheme .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage system (cloud resources) s see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) operation provided by SAN storage (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (time copy, N storage) resource manager to use LIRS based processor usage tracking .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage systems see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) operation provided by SAN storage devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (time copy, N storage) resource manager to use LIRS based memory usage tracking .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage systems see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) operation provided by SAN storage devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (storage system) include one or more of resources included in public cloud (time copy, N storage) , resources included in community cloud (time copy, N storage) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage system (cloud resources) s see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) operation provided by SAN storage (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (time copy, N storage) resource manager to prioritize the one or more VMs for consumption of the cloud resources (storage system) using least recently used (LRU) replacement scheme .
Storage-based Intrusion Detection For Storage Area Networks (SANs) . Storage systems are the next frontier for providing protection against intrusion . Since storage system (cloud resources) s see changes to persistent data , several types of intrusions can be detected by storage systems . Intrusion detection (ID) techniques can be deployed in various storage systems . In this paper , we study how intrusions can be detected at the block storage level and in SAN environments . We propose novel approaches for storage-based intrusion detection and discuss how features of state-of-the-art block storage systems can be used for intrusion detection and recovery of compromised data . In particular we present two prototype systems . First we present a real time intrusion detection system (IDS) which has been integrated within a storage management and virtualization system . In this system incoming requests for storage blocks are examined for signs of intrusions in real time . We then discuss how intrusion detection schemes can be deployed as an appliance loosely coupled with a SAN storage system . The major advantage of this approach is that it does not require any modification and enhancement to the storage system software . In this approach , we use the space and time efficient point-in-time copy (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) operation provided by SAN storage (cloud computing, public cloud, cloud computing environment, community cloud, cloud computing resource manager) devices . We also present performance results showing that the impact of ID on the overall storage system performance is negligible . Recovering data in compromised systems is also discussed .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
ADVANCES IN GRID COMPUTING - EGC 2005. 3470: 786-795 2005

Publication Year: 2005

The Gridkit Distributed Resource Management Framework

The University of Lancaster

Cai, Coulson, Grace, Blair, Mathy, Yeung, Sloot, Hoekstra, Priol, Reinefeld, Bubak
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing (resource requirements) environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (resource requirements) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
The Gridkit Distributed Resource Management Framework . Traditionally , distributed resource management/scheduling systems for the Grid (e . g . Globus/GRAM/Condor-G) have tended to deal with coarse-grained and concrete resource types (e . g . compute nodes and disks) , to be statically configured and non-extensible , and to be non-adaptive at runtime . In this paper , we present a new resource management framework that tries to overcome these limitations . The framework , which is part of our 'Gridkit' middleware platform , uniformly accommodates an extensible set of resource types that may be both fine-grained (such as threads and TCP/IP connections) , and abstract (i . e . represent application-level concepts such as matrix containers) . In addition , it is highly configurable and extensible in terms of pluggable strategies , and supports flexible runtime adaptation to fluctuating application demand and resource availability . As a key contribution , the notion of tasks enables resource requirements (cloud computing, memory usage) to be expressed orthogonally to the structure of the application , allowing intuitive application-level QoS/resource specification , highly flexible mappings of applications to available distributed infrastructures , and also facilitates autonomic adaptation .
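
The "maximum capacity for utilization of allowed cloud resources" element couples the switch to the second scheme to both a capacity ceiling and the threshold on rate change. A one-function sketch of that gating logic follows; the headroom factor, the 0.2 threshold, and the function name are made-up example values, not figures from the patent or the Gridkit paper.

```python
# Illustrative gate for the second resource management scheme: it governs only
# when total utilization nears the allowed ceiling and the rate change spikes.
def should_use_second_scheme(total_utilization, max_capacity,
                             rate_change, change_threshold=0.2,
                             headroom=0.9):
    near_ceiling = total_utilization >= headroom * max_capacity
    spiked = rate_change > change_threshold
    return near_ceiling and spiked
```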

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (resource requirements) tracking .
The Gridkit Distributed Resource Management Framework . Traditionally , distributed resource management/scheduling systems for the Grid (e . g . Globus/GRAM/Condor-G) have tended to deal with coarse-grained and concrete resource types (e . g . compute nodes and disks) , to be statically configured and non-extensible , and to be non-adaptive at runtime . In this paper , we present a new resource management framework that tries to overcome these limitations . The framework , which is part of our 'Gridkit' middleware platform , uniformly accommodates an extensible set of resource types that may be both fine-grained (such as threads and TCP/IP connections) , and abstract (i . e . represent application-level concepts such as matrix containers) . In addition , it is highly configurable and extensible in terms of pluggable strategies , and supports flexible runtime adaptation to fluctuating application demand and resource availability . As a key contribution , the notion of tasks enables resource requirements (cloud computing, memory usage) to be expressed orthogonally to the structure of the application , allowing intuitive application-level QoS/resource specification , highly flexible mappings of applications to available distributed infrastructures , and also facilitates autonomic adaptation .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (resource availability) , Internet resources , or resources included in virtual private networks (VPNs) .
The Gridkit Distributed Resource Management Framework . Traditionally , distributed resource management/scheduling systems for the Grid (e . g . Globus/GRAM/Condor-G) have tended to deal with coarse-grained and concrete resource types (e . g . compute nodes and disks) , to be statically configured and non-extensible , and to be non-adaptive at runtime . In this paper , we present a new resource management framework that tries to overcome these limitations . The framework , which is part of our 'Gridkit' middleware platform , uniformly accommodates an extensible set of resource types that may be both fine-grained (such as threads and TCP/IP connections) , and abstract (i . e . represent application-level concepts such as matrix containers) . In addition , it is highly configurable and extensible in terms of pluggable strategies , and supports flexible runtime adaptation to fluctuating application demand and resource availability (hybrid cloud) . As a key contribution , the notion of tasks enables resource requirements to be expressed orthogonally to the structure of the application , allowing intuitive application-level QoS/resource specification , highly flexible mappings of applications to available distributed infrastructures , and also facilitates autonomic adaptation .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing (resource requirements) resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (resource requirements) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
The Gridkit Distributed Resource Management Framework . Traditionally , distributed resource management/scheduling systems for the Grid (e . g . Globus/GRAM/Condor-G) have tended to deal with coarse-grained and concrete resource types (e . g . compute nodes and disks) , to be statically configured and non-extensible , and to be non-adaptive at runtime . In this paper , we present a new resource management framework that tries to overcome these limitations . The framework , which is part of our 'Gridkit' middleware platform , uniformly accommodates an extensible set of resource types that may be both fine-grained (such as threads and TCP/IP connections) , and abstract (i . e . represent application-level concepts such as matrix containers) . In addition , it is highly configurable and extensible in terms of pluggable strategies , and supports flexible runtime adaptation to fluctuating application demand and resource availability . As a key contribution , the notion of tasks enables resource requirements (cloud computing, memory usage) to be expressed orthogonally to the structure of the application , allowing intuitive application-level QoS/resource specification , highly flexible mappings of applications to available distributed infrastructures , and also facilitates autonomic adaptation .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (resource requirements) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
The Gridkit Distributed Resource Management Framework . Traditionally , distributed resource management/scheduling systems for the Grid (e . g . Globus/GRAM/Condor-G) have tended to deal with coarse-grained and concrete resource types (e . g . compute nodes and disks) , to be statically configured and non-extensible , and to be non-adaptive at runtime . In this paper , we present a new resource management framework that tries to overcome these limitations . The framework , which is part of our 'Gridkit' middleware platform , uniformly accommodates an extensible set of resource types that may be both fine-grained (such as threads and TCP/IP connections) , and abstract (i . e . represent application-level concepts such as matrix containers) . In addition , it is highly configurable and extensible in terms of pluggable strategies , and supports flexible runtime adaptation to fluctuating application demand and resource availability . As a key contribution , the notion of tasks enables resource requirements (cloud computing, memory usage) to be expressed orthogonally to the structure of the application , allowing intuitive application-level QoS/resource specification , highly flexible mappings of applications to available distributed infrastructures , and also facilitates autonomic adaptation .
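
Claim 8 narrows claim 7 to prioritization under a low inter-reference recency set (LIRS) replacement scheme. LIRS ranks items by the distance between their last two references rather than by pure recency. The sketch below is a deliberately simplified approximation of that ranking idea applied to VM accesses; it omits the LIR/HIR set and stack bookkeeping of the full algorithm, and every identifier is an assumption introduced for illustration.

```python
from collections import defaultdict


class LirsStylePrioritizer:
    """Simplified illustration of LIRS-style ranking: VMs whose last two
    references are close together (low inter-reference recency) are kept at
    high priority; VMs with a large or unknown gap are demotion candidates.
    This is a sketch only, not the full LIRS algorithm (no LIR/HIR stacks)."""

    def __init__(self):
        self.clock = 0
        self.history = defaultdict(list)  # vm_id -> up to two access times

    def record_access(self, vm_id: str) -> None:
        self.clock += 1
        times = self.history[vm_id]
        times.append(self.clock)
        if len(times) > 2:
            times.pop(0)                  # keep only the last two references

    def inter_reference_recency(self, vm_id: str) -> float:
        times = self.history[vm_id]
        if len(times) < 2:
            return float("inf")           # referenced once: treat as HIR-like
        return times[-1] - times[-2]

    def prioritize(self):
        # lower inter-reference recency -> higher priority for cloud resources
        return sorted(self.history, key=self.inter_reference_recency)


p = LirsStylePrioritizer()
for vm in ["vm-a", "vm-b", "vm-a", "vm-c", "vm-a", "vm-b"]:
    p.record_access(vm)
print(p.prioritize())  # ['vm-a', 'vm-b', 'vm-c']
```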

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (resource requirements) resource manager to use LIRS based processor usage tracking .
The Gridkit Distributed Resource Management Framework . Traditionally , distributed resource management/scheduling systems for the Grid (e . g . Globus/GRAM/Condor-G) have tended to deal with coarse-grained and concrete resource types (e . g . compute nodes and disks) , to be statically configured and non-extensible , and to be non-adaptive at runtime . In this paper , we present a new resource management framework that tries to overcome these limitations . The framework , which is part of our 'Gridkit' middleware platform , uniformly accommodates an extensible set of resource types that may be both fine-grained (such as threads and TCP/IP connections) , and abstract (i . e . represent application-level concepts such as matrix containers) . In addition , it is highly configurable and extensible in terms of pluggable strategies , and supports flexible runtime adaptation to fluctuating application demand and resource availability . As a key contribution , the notion of tasks enables resource requirements (cloud computing, memory usage) to be expressed orthogonally to the structure of the application , allowing intuitive application-level QoS/resource specification , highly flexible mappings of applications to available distributed infrastructures , and also facilitates autonomic adaptation .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (resource requirements) resource manager to use LIRS based memory usage (resource requirements) tracking .
The Gridkit Distributed Resource Management Framework . Traditionally , distributed resource management/scheduling systems for the Grid (e . g . Globus/GRAM/Condor-G) have tended to deal with coarse-grained and concrete resource types (e . g . compute nodes and disks) , to be statically configured and non-extensible , and to be non-adaptive at runtime . In this paper , we present a new resource management framework that tries to overcome these limitations . The framework , which is part of our 'Gridkit' middleware platform , uniformly accommodates an extensible set of resource types that may be both fine-grained (such as threads and TCP/IP connections) , and abstract (i . e . represent application-level concepts such as matrix containers) . In addition , it is highly configurable and extensible in terms of pluggable strategies , and supports flexible runtime adaptation to fluctuating application demand and resource availability . As a key contribution , the notion of tasks enables resource requirements (cloud computing, memory usage) to be expressed orthogonally to the structure of the application , allowing intuitive application-level QoS/resource specification , highly flexible mappings of applications to available distributed infrastructures , and also facilitates autonomic adaptation .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (resource availability) , Internet resources , or resources included in virtual private networks (VPNs) .
The Gridkit Distributed Resource Management Framework . Traditionally , distributed resource management/scheduling systems for the Grid (e . g . Globus/GRAM/Condor-G) have tended to deal with coarse-grained and concrete resource types (e . g . compute nodes and disks) , to be statically configured and non-extensible , and to be non-adaptive at runtime . In this paper , we present a new resource management framework that tries to overcome these limitations . The framework , which is part of our 'Gridkit' middleware platform , uniformly accommodates an extensible set of resource types that may be both fine-grained (such as threads and TCP/IP connections) , and abstract (i . e . represent application-level concepts such as matrix containers) . In addition , it is highly configurable and extensible in terms of pluggable strategies , and supports flexible runtime adaptation to fluctuating application demand and resource availability (hybrid cloud) . As a key contribution , the notion of tasks enables resource requirements to be expressed orthogonally to the structure of the application , allowing intuitive application-level QoS/resource specification , highly flexible mappings of applications to available distributed infrastructures , and also facilitates autonomic adaptation .
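
Claim 11 enumerates the categories of alternate cloud resources (public cloud, community cloud, private cloud, hybrid cloud, Internet resources, and VPN resources). As an illustration only, such categories could be modeled as a simple enumeration from which a migration target is chosen; the type name and selection policy below are hypothetical and are not taken from the patent or the reference.

```python
from enum import Enum, auto


class AlternateCloudResource(Enum):
    """Illustrative categories of alternate cloud resources (claim 11)."""
    PUBLIC_CLOUD = auto()
    COMMUNITY_CLOUD = auto()
    PRIVATE_CLOUD = auto()
    HYBRID_CLOUD = auto()
    INTERNET_RESOURCES = auto()
    VPN_RESOURCES = auto()


def choose_migration_target(available):
    # hypothetical policy: take the first available category in declaration order
    for category in AlternateCloudResource:
        if category in available:
            return category
    return None


print(choose_migration_target({AlternateCloudResource.HYBRID_CLOUD,
                               AlternateCloudResource.VPN_RESOURCES}))
# AlternateCloudResource.HYBRID_CLOUD
```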

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (resource requirements) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
The Gridkit Distributed Resource Management Framework . Traditionally , distributed resource management/scheduling systems for the Grid (e . g . Globus/GRAM/Condor-G) have tended to deal with coarse-grained and concrete resource types (e . g . compute nodes and disks) , to be statically configured and non-extensible , and to be non-adaptive at runtime . In this paper , we present a new resource management framework that tries to overcome these limitations . The framework , which is part of our 'Gridkit' middleware platform , uniformly accommodates an extensible set of resource types that may be both fine-grained (such as threads and TCP/IP connections) , and abstract (i . e . represent application-level concepts such as matrix containers) . In addition , it is highly configurable and extensible in terms of pluggable strategies , and supports flexible runtime adaptation to fluctuating application demand and resource availability . As a key contribution , the notion of tasks enables resource requirements (cloud computing, memory usage) to be expressed orthogonally to the structure of the application , allowing intuitive application-level QoS/resource specification , highly flexible mappings of applications to available distributed infrastructures , and also facilitates autonomic adaptation .
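
Claim 12 recites prioritization under a least recently used (LRU) replacement scheme. A minimal LRU ordering over VM identifiers can be kept with an ordered mapping, as in the sketch below; the class and method names are assumptions made for illustration.

```python
from collections import OrderedDict


class LruPrioritizer:
    """Minimal sketch: least-recently-used ordering over VM identifiers.
    The least recently used VM is the first candidate for demotion/migration."""

    def __init__(self):
        self._order = OrderedDict()       # keys in access order, oldest first

    def record_access(self, vm_id: str) -> None:
        self._order.pop(vm_id, None)
        self._order[vm_id] = None         # move to most-recently-used end

    def least_recently_used(self):
        return list(self._order)          # oldest first


lru = LruPrioritizer()
for vm in ["vm-a", "vm-b", "vm-a", "vm-c"]:
    lru.record_access(vm)
print(lru.least_recently_used())  # ['vm-b', 'vm-a', 'vm-c']
```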

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing (resource requirements) environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (resource requirements) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
The Gridkit Distributed Resource Management Framework . Traditionally , distributed resource management/scheduling systems for the Grid (e . g . Globus/GRAM/Condor-G) have tended to deal with coarse-grained and concrete resource types (e . g . compute nodes and disks) , to be statically configured and non-extensible , and to be non-adaptive at runtime . In this paper , we present a new resource management framework that tries to overcome these limitations . The framework , which is part of our 'Gridkit' middleware platform , uniformly accommodates an extensible set of resource types that may be both fine-grained (such as threads and TCP/IP connections) , and abstract (i . e . represent application-level concepts such as matrix containers) . In addition , it is highly configurable and extensible in terms of pluggable strategies , and supports flexible runtime adaptation to fluctuating application demand and resource availability . As a key contribution , the notion of tasks enables resource requirements (cloud computing, memory usage) to be expressed orthogonally to the structure of the application , allowing intuitive application-level QoS/resource specification , highly flexible mappings of applications to available distributed infrastructures , and also facilitates autonomic adaptation .
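
Claim 13 also lists a "change region size" determined from changed regions of a graphical display generated by the VMs among the monitored signals. As a hedged illustration of how such a signal could be derived, the sketch below compares two frames tile by tile and counts the tiles that differ; the frame representation, tile size, and function name are assumptions, not elements of the patent or of the Gridkit reference.

```python
def changed_region_size(prev_frame, cur_frame, tile=16):
    """Count how many fixed-size tiles of a VM's display differ between two
    frames; the count stands in for the 'change region size' signal.
    Frames are 2-D lists of pixel values (hypothetical representation)."""
    rows, cols = len(cur_frame), len(cur_frame[0])
    changed = 0
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            block_prev = [row[c:c + tile] for row in prev_frame[r:r + tile]]
            block_cur = [row[c:c + tile] for row in cur_frame[r:r + tile]]
            if block_prev != block_cur:
                changed += 1
    return changed


# usage sketch: a single 16x16 tile changes between two 64x64 frames
prev = [[0] * 64 for _ in range(64)]
cur = [[0] * 64 for _ in range(64)]
cur[3][3] = 255
print(changed_region_size(prev, cur))  # 1
```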

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (resource requirements) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
The Gridkit Distributed Resource Management Framework . Traditionally , distributed resource management/scheduling systems for the Grid (e . g . Globus/GRAM/Condor-G) have tended to deal with coarse-grained and concrete resource types (e . g . compute nodes and disks) , to be statically configured and non-extensible , and to be non-adaptive at runtime . In this paper , we present a new resource management framework that tries to overcome these limitations . The framework , which is part of our 'Gridkit' middleware platform , uniformly accommodates an extensible set of resource types that may be both fine-grained (such as threads and TCP/IP connections) , and abstract (i . e . represent application-level concepts such as matrix containers) . In addition , it is highly configurable and extensible in terms of pluggable strategies , and supports flexible runtime adaptation to fluctuating application demand and resource availability . As a key contribution , the notion of tasks enables resource requirements (cloud computing, memory usage) to be expressed orthogonally to the structure of the application , allowing intuitive application-level QoS/resource specification , highly flexible mappings of applications to available distributed infrastructures , and also facilitates autonomic adaptation .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (resource requirements) resource manager to use LIRS based processor usage tracking .
The Gridkit Distributed Resource Management Framework . Traditionally , distributed resource management/scheduling systems for the Grid (e . g . Globus/GRAM/Condor-G) have tended to deal with coarse-grained and concrete resource types (e . g . compute nodes and disks) , to be statically configured and non-extensible , and to be non-adaptive at runtime . In this paper , we present a new resource management framework that tries to overcome these limitations . The framework , which is part of our 'Gridkit' middleware platform , uniformly accommodates an extensible set of resource types that may be both fine-grained (such as threads and TCP/IP connections) , and abstract (i . e . represent application-level concepts such as matrix containers) . In addition , it is highly configurable and extensible in terms of pluggable strategies , and supports flexible runtime adaptation to fluctuating application demand and resource availability . As a key contribution , the notion of tasks enables resource requirements (cloud computing, memory usage) to be expressed orthogonally to the structure of the application , allowing intuitive application-level QoS/resource specification , highly flexible mappings of applications to available distributed infrastructures , and also facilitates autonomic adaptation .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (resource requirements) resource manager to use LIRS based memory usage (resource requirements) tracking .
The Gridkit Distributed Resource Management Framework . Traditionally , distributed resource management/scheduling systems for the Grid (e . g . Globus/GRAM/Condor-G) have tended to deal with coarse-grained and concrete resource types (e . g . compute nodes and disks) , to be statically configured and non-extensible , and to be non-adaptive at runtime . In this paper , we present a new resource management framework that tries to overcome these limitations . The framework , which is part of our 'Gridkit' middleware platform , uniformly accommodates an extensible set of resource types that may be both fine-grained (such as threads and TCP/IP connections) , and abstract (i . e . represent application-level concepts such as matrix containers) . In addition , it is highly configurable and extensible in terms of pluggable strategies , and supports flexible runtime adaptation to fluctuating application demand and resource availability . As a key contribution , the notion of tasks enables resource requirements (cloud computing, memory usage) to be expressed orthogonally to the structure of the application , allowing intuitive application-level QoS/resource specification , highly flexible mappings of applications to available distributed infrastructures , and also facilitates autonomic adaptation .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (resource availability) , Internet resources , or resources included in virtual private networks (VPNs) .
The Gridkit Distributed Resource Management Framework . Traditionally , distributed resource management/scheduling systems for the Grid (e . g . Globus/GRAM/Condor-G) have tended to deal with coarse-grained and concrete resource types (e . g . compute nodes and disks) , to be statically configured and non-extensible , and to be non-adaptive at runtime . In this paper , we present a new resource management framework that tries to overcome these limitations . The framework , which is part of our 'Gridkit' middleware platform , uniformly accommodates an extensible set of resource types that may be both fine-grained (such as threads and TCP/IP connections) , and abstract (i . e . represent application-level concepts such as matrix containers) . In addition , it is highly configurable and extensible in terms of pluggable strategies , and supports flexible runtime adaptation to fluctuating application demand and resource availability (hybrid cloud) . As a key contribution , the notion of tasks enables resource requirements to be expressed orthogonally to the structure of the application , allowing intuitive application-level QoS/resource specification , highly flexible mappings of applications to available distributed infrastructures , and also facilitates autonomic adaptation .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (resource requirements) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
The Gridkit Distributed Resource Management Framework . Traditionally , distributed resource management/scheduling systems for the Grid (e . g . Globus/GRAM/Condor-G) have tended to deal with coarse-grained and concrete resource types (e . g . compute nodes and disks) , to be statically configured and non-extensible , and to be non-adaptive at runtime . In this paper , we present a new resource management framework that tries to overcome these limitations . The framework , which is part of our 'Gridkit' middleware platform , uniformly accommodates an extensible set of resource types that may be both fine-grained (such as threads and TCP/IP connections) , and abstract (i . e . represent application-level concepts such as matrix containers) . In addition , it is highly configurable and extensible in terms of pluggable strategies , and supports flexible runtime adaptation to fluctuating application demand and resource availability . As a key contribution , the notion of tasks enables resource requirements (cloud computing, memory usage) to be expressed orthogonally to the structure of the application , allowing intuitive application-level QoS/resource specification , highly flexible mappings of applications to available distributed infrastructures , and also facilitates autonomic adaptation .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING. 64 (9): 1069-1085 SEP 2004

Publication Year: 2004

STICS: SCSI-to-IP Cache For Storage Area Networks

Tennessee Technological University (Tennessee Tech), The University of Rhode Island (URI)

He, Zhang, Yang
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (storage area) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
STICS : SCSI-to-IP Cache For Storage Area Networks . Data storage plays an essential role in today's fast-growing data-intensive network services . New standards and products emerge very rapidly for networked data storage . Given the mature Internet infrastructure , the overwhelming preference among the IT community recently is using IP for storage networking because of economy and convenience . iSCSI is one of the most recent standards that allow SCSI protocols to be carried out over IP networks . However , there are many disparities between SCSI and IP in terms of protocols , speeds , bandwidths , data unit sizes , and design considerations that prevent fast and efficient deployment of storage area (memory usage) network (SAN) over IP . This paper introduces SCSI-to-IP cache storage (STICS) , a novel storage architecture that couples reliable and high-speed data caching with low-overhead conversion between SCSI and IP protocols . A STICS block consists of one or several storage devices and an intelligent processing unit with CPU and RAM . The storage devices are used to cache and store data while the intelligent processing unit carries out the caching algorithm , protocol conversion , and self-management functions . Through the efficient caching algorithm and localization of certain unnecessary protocol overheads , STICS can significantly improve performance , reliability , and scalability over current iSCSI systems . Furthermore , STICS can be used as a basic plug-and-play building block for data storage over IP . Analogous to "cache memory" invented several decades ago for bridging the speed gap between CPU and memory , STICS is the first-ever "cache storage" for bridging the gap between SCSI and IP making it possible to build an efficient SAN over IP . We have implemented software STICS prototype on Linux operating system . Numerical results using popular benchmarks such as vxbench , IOzone , PostMark , and EMC's trace have shown a dramatic performance gain over the current iSCSI implementation . (C) 2004 Elsevier Inc . All rights reserved .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage tracking (processing unit) .
STICS : SCSI-to-IP Cache For Storage Area Networks . Data storage plays an essential role in today's fast-growing data-intensive network services . New standards and products emerge very rapidly for networked data storage . Given the mature Internet infrastructure , the overwhelming preference among the IT community recently is using IP for storage networking because of economy and convenience . iSCSI is one of the most recent standards that allow SCSI protocols to be carried out over IP networks . However , there are many disparities between SCSI and IP in terms of protocols , speeds , bandwidths , data unit sizes , and design considerations that prevent fast and efficient deployment of storage area network (SAN) over IP . This paper introduces SCSI-to-IP cache storage (STICS) , a novel storage architecture that couples reliable and high-speed data caching with low-overhead conversion between SCSI and IP protocols . A STICS block consists of one or several storage devices and an intelligent processing unit (processor usage tracking) with CPU and RAM . The storage devices are used to cache and store data while the intelligent processing unit carries out the caching algorithm , protocol conversion , and self-management functions . Through the efficient caching algorithm and localization of certain unnecessary protocol overheads , STICS can significantly improve performance , reliability , and scalability over current iSCSI systems . Furthermore , STICS can be used as a basic plug-and-play building block for data storage over IP . Analogous to "cache memory" invented several decades ago for bridging the speed gap between CPU and memory , STICS is the first-ever "cache storage" for bridging the gap between SCSI and IP making it possible to build an efficient SAN over IP . We have implemented software STICS prototype on Linux operating system . Numerical results using popular benchmarks such as vxbench , IOzone , PostMark , and EMC's trace have shown a dramatic performance gain over the current iSCSI implementation . (C) 2004 Elsevier Inc . All rights reserved .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (storage area) tracking .
STICS : SCSI-to-IP Cache For Storage Area Networks . Data storage plays an essential role in today's fast-growing data-intensive network services . New standards and products emerge very rapidly for networked data storage . Given the mature Internet infrastructure , the overwhelming preference among the IT community recently is using IP for storage networking because of economy and convenience . iSCSI is one of the most recent standards that allow SCSI protocols to be carried out over IP networks . However , there are many disparities between SCSI and IP in terms of protocols , speeds , bandwidths , data unit sizes , and design considerations that prevent fast and efficient deployment of storage area (memory usage) network (SAN) over IP . This paper introduces SCSI-to-IP cache storage (STICS) , a novel storage architecture that couples reliable and high-speed data caching with low-overhead conversion between SCSI and IP protocols . A STICS block consists of one or several storage devices and an intelligent processing unit with CPU and RAM . The storage devices are used to cache and store data while the intelligent processing unit carries out the caching algorithm , protocol conversion , and self-management functions . Through the efficient caching algorithm and localization of certain unnecessary protocol overheads , STICS can significantly improve performance , reliability , and scalability over current iSCSI systems . Furthermore , STICS can be used as a basic plug-and-play building block for data storage over IP . Analogous to "cache memory" invented several decades ago for bridging the speed gap between CPU and memory , STICS is the first-ever "cache storage" for bridging the gap between SCSI and IP making it possible to build an efficient SAN over IP . We have implemented software STICS prototype on Linux operating system . Numerical results using popular benchmarks such as vxbench , IOzone , PostMark , and EMC's trace have shown a dramatic performance gain over the current iSCSI implementation . (C) 2004 Elsevier Inc . All rights reserved .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (network services) , or resources included in virtual private networks (VPNs) .
STICS : SCSI-to-IP Cache For Storage Area Networks . Data storage plays an essential role in today's fast-growing data-intensive network services (Internet resources) . New standards and products emerge very rapidly for networked data storage . Given the mature Internet infrastructure , the overwhelming preference among the IT community recently is using IP for storage networking because of economy and convenience . iSCSI is one of the most recent standards that allow SCSI protocols to be carried out over IP networks . However , there are many disparities between SCSI and IP in terms of protocols , speeds , bandwidths , data unit sizes , and design considerations that prevent fast and efficient deployment of storage area network (SAN) over IP . This paper introduces SCSI-to-IP cache storage (STICS) , a novel storage architecture that couples reliable and high-speed data caching with low-overhead conversion between SCSI and IP protocols . A STICS block consists of one or several storage devices and an intelligent processing unit with CPU and RAM . The storage devices are used to cache and store data while the intelligent processing unit carries out the caching algorithm , protocol conversion , and self-management functions . Through the efficient caching algorithm and localization of certain unnecessary protocol overheads , STICS can significantly improve performance , reliability , and scalability over current iSCSI systems . Furthermore , STICS can be used as a basic plug-and-play building block for data storage over IP . Analogous to "cache memory" invented several decades ago for bridging the speed gap between CPU and memory , STICS is the first-ever "cache storage" for bridging the gap between SCSI and IP making it possible to build an efficient SAN over IP . We have implemented software STICS prototype on Linux operating system . Numerical results using popular benchmarks such as vxbench , IOzone , PostMark , and EMC's trace have shown a dramatic performance gain over the current iSCSI implementation . (C) 2004 Elsevier Inc . All rights reserved .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (storage area) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
STICS : SCSI-to-IP Cache For Storage Area Networks . Data storage plays an essential role in today's fast-growing data-intensive network services . New standards and products emerge very rapidly for networked data storage . Given the mature Internet infrastructure , the overwhelming preference among the IT community recently is using IP for storage networking because of economy and convenience . iSCSI is one of the most recent standards that allow SCSI protocols to be carried out over IP networks . However , there are many disparities between SCSI and IP in terms of protocols , speeds , bandwidths , data unit sizes , and design considerations that prevent fast and efficient deployment of storage area (memory usage) network (SAN) over IP . This paper introduces SCSI-to-IP cache storage (STICS) , a novel storage architecture that couples reliable and high-speed data caching with low-overhead conversion between SCSI and IP protocols . A STICS block consists of one or several storage devices and an intelligent processing unit with CPU and RAM . The storage devices are used to cache and store data while the intelligent processing unit carries out the caching algorithm , protocol conversion , and self-management functions . Through the efficient caching algorithm and localization of certain unnecessary protocol overheads , STICS can significantly improve performance , reliability , and scalability over current iSCSI systems . Furthermore , STICS can be used as a basic plug-and-play building block for data storage over IP . Analogous to "cache memory" invented several decades ago for bridging the speed gap between CPU and memory , STICS is the first-ever "cache storage" for bridging the gap between SCSI and IP making it possible to build an efficient SAN over IP . We have implemented software STICS prototype on Linux operating system . Numerical results using popular benchmarks such as vxbench , IOzone , PostMark , and EMC's trace have shown a dramatic performance gain over the current iSCSI implementation . (C) 2004 Elsevier Inc . All rights reserved .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage tracking (processing unit) .
STICS : SCSI-to-IP Cache For Storage Area Networks . Data storage plays an essential role in today's fast-growing data-intensive network services . New standards and products emerge very rapidly for networked data storage . Given the mature Internet infrastructure , the overwhelming preference among the IT community recently is using IP for storage networking because of economy and convenience . iSCSI is one of the most recent standards that allow SCSI protocols to be carried out over IP networks . However , there are many disparities between SCSI and IP in terms of protocols , speeds , bandwidths , data unit sizes , and design considerations that prevent fast and efficient deployment of storage area network (SAN) over IP . This paper introduces SCSI-to-IP cache storage (STICS) , a novel storage architecture that couples reliable and high-speed data caching with low-overhead conversion between SCSI and IP protocols . A STICS block consists of one or several storage devices and an intelligent processing unit (processor usage tracking) with CPU and RAM . The storage devices are used to cache and store data while the intelligent processing unit carries out the caching algorithm , protocol conversion , and self-management functions . Through the efficient caching algorithm and localization of certain unnecessary protocol overheads , STICS can significantly improve performance , reliability , and scalability over current iSCSI systems . Furthermore , STICS can be used as a basic plug-and-play building block for data storage over IP . Analogous to "cache memory" invented several decades ago for bridging the speed gap between CPU and memory , STICS is the first-ever "cache storage" for bridging the gap between SCSI and IP making it possible to build an efficient SAN over IP . We have implemented software STICS prototype on Linux operating system . Numerical results using popular benchmarks such as vxbench , IOzone , PostMark , and EMC's trace have shown a dramatic performance gain over the current iSCSI implementation . (C) 2004 Elsevier Inc . All rights reserved .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (storage area) tracking .
STICS : SCSI-to-IP Cache For Storage Area Networks . Data storage plays an essential role in today's fast-growing data-intensive network services . New standards and products emerge very rapidly for networked data storage . Given the mature Internet infrastructure , the overwhelming preference among the IT community recently is using IP for storage networking because of economy and convenience . iSCSI is one of the most recent standards that allow SCSI protocols to be carried out over IP networks . However , there are many disparities between SCSI and IP in terms of protocols , speeds , bandwidths , data unit sizes , and design considerations that prevent fast and efficient deployment of storage area (memory usage) network (SAN) over IP . This paper introduces SCSI-to-IP cache storage (STICS) , a novel storage architecture that couples reliable and high-speed data caching with low-overhead conversion between SCSI and IP protocols . A STICS block consists of one or several storage devices and an intelligent processing unit with CPU and RAM . The storage devices are used to cache and store data while the intelligent processing unit carries out the caching algorithm , protocol conversion , and self-management functions . Through the efficient caching algorithm and localization of certain unnecessary protocol overheads , STICS can significantly improve performance , reliability , and scalability over current iSCSI systems . Furthermore , STICS can be used as a basic plug-and-play building block for data storage over IP . Analogous to "cache memory" invented several decades ago for bridging the speed gap between CPU and memory , STICS is the first-ever "cache storage" for bridging the gap between SCSI and IP making it possible to build an efficient SAN over IP . We have implemented software STICS prototype on Linux operating system . Numerical results using popular benchmarks such as vxbench , IOzone , PostMark , and EMC's trace have shown a dramatic performance gain over the current iSCSI implementation . (C) 2004 Elsevier Inc . All rights reserved .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (network services) , or resources included in virtual private networks (VPNs) .
STICS : SCSI-to-IP Cache For Storage Area Networks . Data storage plays an essential role in today's fast-growing data-intensive network services (Internet resources) . New standards and products emerge very rapidly for networked data storage . Given the mature Internet infrastructure , the overwhelming preference among the IT community recently is using IP for storage networking because of economy and convenience . iSCSI is one of the most recent standards that allow SCSI protocols to be carried out over IP networks . However , there are many disparities between SCSI and IP in terms of protocols , speeds , bandwidths , data unit sizes , and design considerations that prevent fast and efficient deployment of storage area network (SAN) over IP . This paper introduces SCSI-to-IP cache storage (STICS) , a novel storage architecture that couples reliable and high-speed data caching with low-overhead conversion between SCSI and IP protocols . A STICS block consists of one or several storage devices and an intelligent processing unit with CPU and RAM . The storage devices are used to cache and store data while the intelligent processing unit carries out the caching algorithm , protocol conversion , and self-management functions . Through the efficient caching algorithm and localization of certain unnecessary protocol overheads , STICS can significantly improve performance , reliability , and scalability over current iSCSI systems . Furthermore , STICS can be used as a basic plug-and-play building block for data storage over IP . Analogous to "cache memory" invented several decades ago for bridging the speed gap between CPU and memory , STICS is the first-ever "cache storage" for bridging the gap between SCSI and IP making it possible to build an efficient SAN over IP . We have implemented software STICS prototype on Linux operating system . Numerical results using popular benchmarks such as vxbench , IOzone , PostMark , and EMC's trace have shown a dramatic performance gain over the current iSCSI implementation . (C) 2004 Elsevier Inc . All rights reserved .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (storage area) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
STICS : SCSI-to-IP Cache For Storage Area Networks . Data storage plays an essential role in today's fast-growing data-intensive network services . New standards and products emerge very rapidly for networked data storage . Given the mature Internet infrastructure , the overwhelming preference among the IT community recently is using IP for storage networking because of economy and convenience . iSCSI is one of the most recent standards that allow SCSI protocols to be carried out over IP networks . However , there are many disparities between SCSI and IP in terms of protocols , speeds , bandwidths , data unit sizes , and design considerations that prevent fast and efficient deployment of storage area (memory usage) network (SAN) over IP . This paper introduces SCSI-to-IP cache storage (STICS) , a novel storage architecture that couples reliable and high-speed data caching with low-overhead conversion between SCSI and IP protocols . A STICS block consists of one or several storage devices and an intelligent processing unit with CPU and RAM . The storage devices are used to cache and store data while the intelligent processing unit carries out the caching algorithm , protocol conversion , and self-management functions . Through the efficient caching algorithm and localization of certain unnecessary protocol overheads , STICS can significantly improve performance , reliability , and scalability over current iSCSI systems . Furthermore , STICS can be used as a basic plug-and-play building block for data storage over IP . Analogous to "cache memory" invented several decades ago for bridging the speed gap between CPU and memory , STICS is the first-ever "cache storage" for bridging the gap between SCSI and IP making it possible to build an efficient SAN over IP . We have implemented software STICS prototype on Linux operating system . Numerical results using popular benchmarks such as vxbench , IOzone , PostMark , and EMC's trace have shown a dramatic performance gain over the current iSCSI implementation . (C) 2004 Elsevier Inc . All rights reserved .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage tracking (processing unit) .
STICS : SCSI-to-IP Cache For Storage Area Networks . Data storage plays an essential role in today's fast-growing data-intensive network services . New standards and products emerge very rapidly for networked data storage . Given the mature Internet infrastructure , the overwhelming preference among the IT community recently is using IP for storage networking because of economy and convenience . iSCSI is one of the most recent standards that allow SCSI protocols to be carried out over IP networks . However , there are many disparities between SCSI and IP in terms of protocols , speeds , bandwidths , data unit sizes , and design considerations that prevent fast and efficient deployment of storage area network (SAN) over IP . This paper introduces SCSI-to-IP cache storage (STICS) , a novel storage architecture that couples reliable and high-speed data caching with low-overhead conversion between SCSI and IP protocols . A STICS block consists of one or several storage devices and an intelligent processing unit (processor usage tracking) with CPU and RAM . The storage devices are used to cache and store data while the intelligent processing unit carries out the caching algorithm , protocol conversion , and self-management functions . Through the efficient caching algorithm and localization of certain unnecessary protocol overheads , STICS can significantly improve performance , reliability , and scalability over current iSCSI systems . Furthermore , STICS can be used as a basic plug-and-play building block for data storage over IP . Analogous to "cache memory" invented several decades ago for bridging the speed gap between CPU and memory , STICS is the first-ever "cache storage" for bridging the gap between SCSI and IP making it possible to build an efficient SAN over IP . We have implemented software STICS prototype on Linux operating system . Numerical results using popular benchmarks such as vxbench , IOzone , PostMark , and EMC's trace have shown a dramatic performance gain over the current iSCSI implementation . (C) 2004 Elsevier Inc . All rights reserved .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (storage area) tracking .
STICS : SCSI-to-IP Cache For Storage Area Networks . Data storage plays an essential role in today's fast-growing data-intensive network services . New standards and products emerge very rapidly for networked data storage . Given the mature Internet infrastructure , the overwhelming preference among the IT community recently is using IP for storage networking because of economy and convenience . iSCSI is one of the most recent standards that allow SCSI protocols to be carried out over IP networks . However , there are many disparities between SCSI and IP in terms of protocols , speeds , bandwidths , data unit sizes , and design considerations that prevent fast and efficient deployment of storage area (memory usage) network (SAN) over IP . This paper introduces SCSI-to-IP cache storage (STICS) , a novel storage architecture that couples reliable and high-speed data caching with low-overhead conversion between SCSI and IP protocols . A STICS block consists of one or several storage devices and an intelligent processing unit with CPU and RAM . The storage devices are used to cache and store data while the intelligent processing unit carries out the caching algorithm , protocol conversion , and self-management functions . Through the efficient caching algorithm and localization of certain unnecessary protocol overheads , STICS can significantly improve performance , reliability , and scalability over current iSCSI systems . Furthermore , STICS can be used as a basic plug-and-play building block for data storage over IP . Analogous to "cache memory" invented several decades ago for bridging the speed gap between CPU and memory , STICS is the first-ever "cache storage" for bridging the gap between SCSI and IP making it possible to build an efficient SAN over IP . We have implemented software STICS prototype on Linux operating system . Numerical results using popular benchmarks such as vxbench , IOzone , PostMark , and EMC's trace have shown a dramatic performance gain over the current iSCSI implementation . (C) 2004 Elsevier Inc . All rights reserved .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (network services) , or resources included in virtual private networks (VPNs) .
STICS : SCSI-to-IP Cache For Storage Area Networks . Data storage plays an essential role in today's fast-growing data-intensive network services (Internet resources) . New standards and products emerge very rapidly for networked data storage . Given the mature Internet infrastructure , the overwhelming preference among the IT community recently is using IP for storage networking because of economy and convenience . iSCSI is one of the most recent standards that allow SCSI protocols to be carried out over IP networks . However , there are many disparities between SCSI and IP in terms of protocols , speeds , bandwidths , data unit sizes , and design considerations that prevent fast and efficient deployment of storage area network (SAN) over IP . This paper introduces SCSI-to-IP cache storage (STICS) , a novel storage architecture that couples reliable and high-speed data caching with low-overhead conversion between SCSI and IP protocols . A STICS block consists of one or several storage devices and an intelligent processing unit with CPU and RAM . The storage devices are used to cache and store data while the intelligent processing unit carries out the caching algorithm , protocol conversion , and self-management functions . Through the efficient caching algorithm and localization of certain unnecessary protocol overheads , STICS can significantly improve performance , reliability , and scalability over current iSCSI systems . Furthermore , STICS can be used as a basic plug-and-play building block for data storage over IP . Analogous to "cache memory" invented several decades ago for bridging the speed gap between CPU and memory , STICS is the first-ever "cache storage" for bridging the gap between SCSI and IP making it possible to build an efficient SAN over IP . We have implemented software STICS prototype on Linux operating system . Numerical results using popular benchmarks such as vxbench , IOzone , PostMark , and EMC's trace have shown a dramatic performance gain over the current iSCSI implementation . (C) 2004 Elsevier Inc . All rights reserved .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
2004 12TH IEEE INTERNATIONAL CONFERENCE ON NETWORKS, VOLS 1 AND 2 , PROCEEDINGS. : 48-52 2004

Publication Year: 2004

Design And Development Of Ethernet-based Storage Area Network Protocol

The Data Storage Institute (DSI) Singapore

Wang, Yeo, Zhu, Chong, Pung, Busunglee, Tham, Kuttan
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (storage area) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Design And Development Of Ethernet-based Storage Area Network Protocol . In this paper , we present an Ethernet-based storage area (memory usage) network protocol , called HyperSCSI . It is designed for transmission of SCSI (Small Computer System Interface) protocol data across Ethernet and can provide relatively high performance with simplicity and low cost . It is demonstrated through experimental results that HyperSCSI is capable of using an existing Ethernet-based network infrastructure , common off-the-shelf hardware , and well-established storage technologies and turning them into a high-performance and reliable network storage solution . Another objective of this work is to provide a simple and efficient environment for investigating network storage characteristics and building a new type of SAN over generic network infrastructure .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (storage area) tracking .
Design And Development Of Ethernet-based Storage Area Network Protocol . In this paper , we present an Ethernet-based storage area (memory usage) network protocol , called HyperSCSI . It is designed for transmission of SCSI (Small Computer System Interface) protocol data across Ethernet and can provide relatively high performance with simplicity and low cost . It is demonstrated through experimental results that HyperSCSI is capable of using an existing Ethernet-based network infrastructure , common off-the-shelf hardware , and well-established storage technologies and turning them into a high-performance and reliable network storage solution . Another objective of this work is to provide a simple and efficient environment for investigating network storage characteristics and building a new type of SAN over generic network infrastructure .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (storage area) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Design And Development Of Ethernet-based Storage Area Network Protocol . In this paper , we present an Ethernet-based storage area (memory usage) network protocol , called HyperSCSI . It is designed for transmission of SCSI (Small Computer System Interface) protocol data across Ethernet and can provide relatively high performance with simplicity and low cost . It is demonstrated through experimental results that HyperSCSI is capable of using an existing Ethernet-based network infrastructure , common-off-the-shelf hardware and well-established storage technologies and turning them into a high performance and reliable network storage solution . Another objective of this work is to provide a simple and efficient environment for investigating network storage characteristics and building a new type of SAN over generic network infrastructure .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (storage area) tracking .
Design And Development Of Ethernet-based Storage Area Network Protocol . In this paper , we present an Ethernet-based storage area (memory usage) network protocol , called HyperSCSI . It is designed for transmission of SCSI (Small Computer System Interface) protocol data across Ethernet and can provide relatively high performance with simplicity and low cost . It is demonstrated through experimental results that HyperSCSI is capable of using an existing Ethernet-based network infrastructure , common-off-the-shelf hardware and well-established storage technologies and turning them into a high performance and reliable network storage solution . Another objective of this work is to provide a simple and efficient environment for investigating network storage characteristics and building a new type of SAN over generic network infrastructure .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate (network protocol) , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (storage area) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Design And Development Of Ethernet-based Storage Area Network Protocol . In this paper , we present an Ethernet-based storage area (memory usage) network protocol (memory consumption rate) , called HyperSCSI . It is designed for transmission of SCSI (Small Computer System Interface) protocol data across Ethernet and can provide relatively high performance with simplicity and low cost . It is demonstrated through experimental results that HyperSCSI is capable of using an existing Ethernet-based network infrastructure , common-off-the-shelf hardware and well-established storage technologies and turning them into a high performance and reliable network storage solution . Another objective of this work is to provide a simple and efficient environment for investigating network storage characteristics and building a new type of SAN over generic network infrastructure .
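
Claim 13, as charted above, adds a change region size determined according to changed regions of a graphical display generated by the VMs as one of the monitored quantities. The sketch below illustrates one plausible reading of that quantity, assuming rectangular dirty regions whose areas are summed and compared between samples; the Rect type, the overlap-ignoring sum, and the threshold test are illustrative assumptions rather than the patent's definition.

# Hypothetical illustration of the "change region size" quantity in claim 13;
# the Rect type and the dirty-region list are assumptions, not from the patent.
from dataclasses import dataclass
from typing import List


@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

    def area(self) -> int:
        return self.w * self.h


def change_region_size(dirty_regions: List[Rect]) -> int:
    # Sum of the areas of the regions of the VM's graphical display that
    # changed since the last sample (overlaps ignored for simplicity).
    return sum(r.area() for r in dirty_regions)


def exceeds_threshold(prev_size: int, curr_size: int, threshold: int) -> bool:
    # The claim asks whether the change in the rate exceeds a predetermined
    # threshold; here the change is the difference between two samples.
    return abs(curr_size - prev_size) > threshold


if __name__ == "__main__":
    before = change_region_size([Rect(0, 0, 100, 50)])         # 5000 px
    after = change_region_size([Rect(0, 0, 200, 200)])         # 40000 px
    print(exceeds_threshold(before, after, threshold=10_000))  # True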

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (storage area) tracking .
Design And Development Of Ethernet-based Storage Area Network Protocol . In this paper , we present an Ethernet-based storage area (memory usage) network protocol , called HyperSCSI . It is designed for transmission of SCSI (Small Computer System Interface) protocol data across Ethernet and can provide relatively high performance with simplicity and low cost . It is demonstrated through experimental results that HyperSCSI is capable of using an existing Ethernet-based network infrastructure , common-off-the-shelf hardware and well-established storage technologies and turning them into a high performance and reliable network storage solution . Another objective of this work is to provide a simple and efficient environment for investigating network storage characteristics and building a new type of SAN over generic network infrastructure .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
NUCLEAR INSTRUMENTS & METHODS IN PHYSICS RESEARCH SECTION A-ACCELERATORS SPECTROMETERS DETECTORS AND ASSOCIATED EQUIPMENT. 534 (1-2): 29-32 NOV 21 2004

Publication Year: 2004

First Experiences With Large SAN Storage And Linux

Forschungszentrum Karlsruhe, Institute for Scientific Computing, P.O. Box 36 40, 76021 Karlsruhe, Germany

Van Wezel, Marten, Verstege, Jaeger
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources (storage system) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (storage area) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
First Experiences With Large SAN Storage And Linux . The use of a storage area (memory usage) network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing . The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs . This article describes the design , implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes . Presented are some throughput measurements of one of the largest Linux-based parallel storage system (cloud resources) s in the world . (C) 2004 Elsevier B . V . All rights reserved .

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (storage system) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
First Experiences With Large SAN Storage And Linux . The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing . The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs . This article describes the design , implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes . Presented are some throughput measurements of one of the largest Linux-based parallel storage system (cloud resources) s in the world . (C) 2004 Elsevier B . V . All rights reserved .
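
Claim 2 recites a low inter-reference recency set (LIRS) replacement scheme for the first prioritization, and claim 6 recites a least recently used (LRU) scheme for the second. As a point of comparison, the following sketch contrasts a textbook LRU ordering with a simplified inter-reference-recency ordering over VM identifiers; it approximates the recency idea behind LIRS and is neither the full LIRS algorithm as published nor the prioritization actually described in the patent.

# Simplified comparison of LRU ordering and an LIRS-style ordering over VM ids.
# This is an illustrative approximation of inter-reference recency, not a full
# LIRS implementation and not the scheme described in the patent.
from collections import OrderedDict
from typing import Dict, List


def lru_victims(accesses: List[str]) -> List[str]:
    """Least-recently-used order: first entry is evicted/deprioritized first."""
    recency: "OrderedDict[str, None]" = OrderedDict()
    for vm in accesses:
        recency.pop(vm, None)
        recency[vm] = None          # most recent moves to the end
    return list(recency.keys())     # least recent first


def lirs_style_victims(accesses: List[str]) -> List[str]:
    """Order VMs by inter-reference recency (IRR): the number of accesses made
    between the last two references to a VM (a simplification of LIRS's
    distinct-block recency). Highest IRR, i.e. coldest, comes first."""
    last: Dict[str, int] = {}
    irr: Dict[str, float] = {}
    for t, vm in enumerate(accesses):
        if vm in last:
            irr[vm] = t - last[vm] - 1     # gap since previous reference
        else:
            irr[vm] = float("inf")         # only one reference: treat as cold
        last[vm] = t
    # Coldest first; ties broken by least recent reference.
    return sorted(irr, key=lambda v: (-irr[v], last[v]))


if __name__ == "__main__":
    trace = ["a", "b", "a", "c", "a", "b", "d"]
    print(lru_victims(trace))          # ['c', 'a', 'b', 'd']
    print(lirs_style_victims(trace))   # ['c', 'd', 'b', 'a']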

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (storage system) using the LIRS replacement scheme comprises using LIRS based processor usage tracking .
First Experiences With Large SAN Storage And Linux . The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing . The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs . This article describes the design , implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes . Presented are some throughput measurements of one of the largest Linux-based parallel storage system (cloud resources) s in the world . (C) 2004 Elsevier B . V . All rights reserved .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (storage system) using the LIRS replacement scheme comprises using LIRS based memory usage (storage area) tracking .
First Experiences With Large SAN Storage And Linux . The use of a storage area (memory usage) network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing . The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs . This article describes the design , implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes . Presented are some throughput measurements of one of the largest Linux-based parallel storage system (cloud resources) s in the world . (C) 2004 Elsevier B . V . All rights reserved .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (storage system) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
First Experiences With Large SAN Storage And Linux . The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing . The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs . This article describes the design , implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes . Presented are some throughput measurements of one of the largest Linux-based parallel storage system (cloud resources) s in the world . (C) 2004 Elsevier B . V . All rights reserved .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (storage system) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
First Experiences With Large SAN Storage And Linux . The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing . The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs . This article describes the design , implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes . Presented are some throughput measurements of one of the largest Linux-based parallel storage system (cloud resources) s in the world . (C) 2004 Elsevier B . V . All rights reserved .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources (storage system) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (storage area) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
First Experiences With Large SAN Storage And Linux . The use of a storage area (memory usage) network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing . The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs . This article describes the design , implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes . Presented are some throughput measurements of one of the largest Linux-based parallel storage system (cloud resources) s in the world . (C) 2004 Elsevier B . V . All rights reserved .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (storage system) using a low inter-reference recency set (LIRS) replacement scheme .
First Experiences With Large SAN Storage And Linux . The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing . The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs . This article describes the design , implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes . Presented are some throughput measurements of one of the largest Linux-based parallel storage system (cloud resources) s in the world . (C) 2004 Elsevier B . V . All rights reserved .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (storage area) tracking .
First Experiences With Large SAN Storage And Linux . The use of a storage area (memory usage) network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing . The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs . This article describes the design , implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes . Presented are some throughput measurements of one of the largest Linux-based parallel storage systems in the world . (C) 2004 Elsevier B . V . All rights reserved .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (storage system) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
First Experiences With Large SAN Storage And Linux . The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing . The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs . This article describes the design , implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes . Presented are some throughput measurements of one of the largest Linux-based parallel storage system (cloud resources) s in the world . (C) 2004 Elsevier B . V . All rights reserved .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (storage system) using least recently used (LRU) replacement scheme .
First Experiences With Large SAN Storage And Linux . The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing . The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs . This article describes the design , implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes . Presented are some throughput measurements of one of the largest Linux-based parallel storage system (cloud resources) s in the world . (C) 2004 Elsevier B . V . All rights reserved .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (storage system) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (storage area) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
First Experiences With Large SAN Storage And Linux . The use of a storage area (memory usage) network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing . The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs . This article describes the design , implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes . Presented are some throughput measurements of one of the largest Linux-based parallel storage system (cloud resources) s in the world . (C) 2004 Elsevier B . V . All rights reserved .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (storage system) using a low inter-reference recency set (LIRS) replacement scheme .
First Experiences With Large SAN Storage And Linux . The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing . The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs . This article describes the design , implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes . Presented are some throughput measurements of one of the largest Linux-based parallel storage system (cloud resources) s in the world . (C) 2004 Elsevier B . V . All rights reserved .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (storage area) tracking .
First Experiences With Large SAN Storage And Linux . The use of a storage area (memory usage) network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing . The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs . This article describes the design , implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes . Presented are some throughput measurements of one of the largest Linux-based parallel storage systems in the world . (C) 2004 Elsevier B . V . All rights reserved .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (storage system) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
First Experiences With Large SAN Storage And Linux . The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing . The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs . This article describes the design , implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes . Presented are some throughput measurements of one of the largest Linux-based parallel storage system (cloud resources) s in the world . (C) 2004 Elsevier B . V . All rights reserved .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (storage system) using least recently used (LRU) replacement scheme .
First Experiences With Large SAN Storage And Linux . The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing . The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs . This article describes the design , implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes . Presented are some throughput measurements of one of the largest Linux-based parallel storage system (cloud resources) s in the world . (C) 2004 Elsevier B . V . All rights reserved .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
IBM SYSTEMS JOURNAL. 42 (1): 29-37 2003

Publication Year: 2003

Dynamic Reconfiguration: Basic Building Blocks For Autonomic Computing On IBM PSeries Servers

International Business Machines Corporation, IBM Server Group

Jann, Browning, Burugula
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (load demand) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Dynamic Reconfiguration : Basic Building Blocks For Autonomic Computing On IBM PSeries Servers . A logical partition in an IBM pSeries(TM) symmetric multiprocessor (SMP) system is a subset of the hardware of the SMP that can host an operating system (OS) instance . Dynamic reconfiguration (DR) on these logically partitioned servers enables the movement of the hardware resources (such as processors , memory , and I/O slots) from one logical partition to another without requiring reboots . This capability also enables automatically moving hardware resources to a needy OS instance nondisruptively . Today , as SMPs and non-uniform memory access (NUMA) systems become larger and larger , the ability to run several instances of an operating system(s) on a given hardware system , so that each OS instance plus its subsystems scale or perform well , has the advantage of an optimal aggregate performance , which can translate into cost savings for customers . Though static partitioning provides a solution to this overall performance optimization problem , DR enables an improved solution by providing the capability to dynamically move hardware resources to a needy OS instance in a timely fashion to match workload demand (memory usage) s . Hence , DR capabilities serve as key building blocks for workload managers to provide self-optimizing capabilities in this age of rapid growth in Web servers on the internet .
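
The dynamic reconfiguration passage above is charted against the claim steps that move resource consumption when demand changes. The sketch below illustrates, under assumed partition names, fields, and thresholds, the general idea of threshold-driven movement of a hardware resource from a lightly loaded logical partition to a needy one without a reboot; it is not IBM's DR implementation.

# Illustrative sketch of threshold-driven resource movement between logical
# partitions in the spirit of the DR description above; partition names,
# fields and the policy are assumptions, not IBM's implementation.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Partition:
    name: str
    cpus: int
    demand: float  # observed utilization of the partition's CPUs, 0..1


def rebalance(partitions: List[Partition], high: float = 0.9, low: float = 0.3) -> Optional[str]:
    """Move one CPU from the least-loaded partition to a partition whose
    demand exceeds `high`, without rebooting either OS instance."""
    needy = max(partitions, key=lambda p: p.demand)
    donor = min(partitions, key=lambda p: p.demand)
    if needy.demand > high and donor.demand < low and donor.cpus > 1:
        donor.cpus -= 1
        needy.cpus += 1
        return f"moved 1 CPU from {donor.name} to {needy.name}"
    return None


if __name__ == "__main__":
    lpars = [Partition("web", 4, 0.95), Partition("batch", 6, 0.20)]
    print(rebalance(lpars))   # moved 1 CPU from batch to web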

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme (overall performance) .
Dynamic Reconfiguration : Basic Building Blocks For Autonomic Computing On IBM PSeries Servers . A logical partition in an IBM pSeries(TM) symmetric multiprocessor (SMP) system is a subset of the hardware of the SMP that can host an operating system (OS) instance . Dynamic reconfiguration (DR) on these logically partitioned servers enables the movement of the hardware resources (such as processors , memory , and I/O slots) from one logical partition to another without requiring reboots . This capability also enables automatically moving hardware resources to a needy OS instance nondisruptively . Today , as SMPs and non-uniform memory access (NUMA) systems become larger and larger , the ability to run several instances of an operating system(s) on a given hardware system , so that each OS instance plus its subsystems scale or perform well , has the advantage of an optimal aggregate performance , which can translate into cost savings for customers . Though static partitioning provides a solution to this overall performance (replacement scheme) optimization problem , DR enables an improved solution by providing the capability to dynamically move hardware resources to a needy OS instance in a timely fashion to match workload demands . Hence , DR capabilities serve as key building blocks for workload managers to provide self-optimizing capabilities in this age of rapid growth in Web servers on the internet .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme (overall performance) comprises using LIRS based processor usage tracking .
Dynamic Reconfiguration : Basic Building Blocks For Autonomic Computing On IBM PSeries Servers . A logical partition in an IBM pSeries(TM) symmetric multiprocessor (SMP) system is a subset of the hardware of the SMP that can host an operating system (OS) instance . Dynamic reconfiguration (DR) on these logically partitioned servers enables the movement of the hardware resources (such as processors , memory , and I/O slots) from one logical partition to another without requiring reboots . This capability also enables automatically moving hardware resources to a needy OS instance nondisruptively . Today , as SMPs and non-uniform memory access (NUMA) systems become larger and larger , the ability to run several instances of an operating system(s) on a given hardware system , so that each OS instance plus its subsystems scale or perform well , has the advantage of an optimal aggregate performance , which can translate into cost savings for customers . Though static partitioning provides a solution to this overall performance (replacement scheme) optimization problem , DR enables an improved solution by providing the capability to dynamically move hardware resources to a needy OS instance in a timely fashion to match workload demands . Hence , DR capabilities serve as key building blocks for workload managers to provide self-optimizing capabilities in this age of rapid growth in Web servers on the internet .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme (overall performance) comprises using LIRS based memory usage (load demand) tracking .
Dynamic Reconfiguration : Basic Building Blocks For Autonomic Computing On IBM PSeries Servers . A logical partition in an IBM pSeries(TM) symmetric multiprocessor (SMP) system is a subset of the hardware of the SMP that can host an operating system (OS) instance . Dynamic reconfiguration (DR) on these logically partitioned servers enables the movement of the hardware resources (such as processors , memory , and I/O slots) from one logical partition to another without requiring reboots . This capability also enables automatically moving hardware resources to a needy OS instance nondisruptively . Today , as SMPs and non-uniform memory access (NUMA) systems become larger and larger , the ability to run several instances of an operating system(s) on a given hardware system , so that each OS instance plus its subsystems scale or perform well , has the advantage of an optimal aggregate performance , which can translate into cost savings for customers . Though static partitioning provides a solution to this overall performance (replacement scheme) optimization problem , DR enables an improved solution by providing the capability to dynamically move hardware resources to a needy OS instance in a timely fashion to match workload demand (memory usage) s . Hence , DR capabilities serve as key building blocks for workload managers to provide self-optimizing capabilities in this age of rapid growth in Web servers on the internet .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme (overall performance) .
Dynamic Reconfiguration : Basic Building Blocks For Autonomic Computing On IBM PSeries Servers . A logical partition in an IBM pSeries(TM) symmetric multiprocessor (SMP) system is a subset of the hardware of the SMP that can host an operating system (OS) instance . Dynamic reconfiguration (DR) on these logically partitioned servers enables the movement of the hardware resources (such as processors , memory , and I/O slots) from one logical partition to another without requiring reboots . This capability also enables automatically moving hardware resources to a needy OS instance nondisruptively . Today , as SMPs and non-uniform memory access (NUMA) systems become larger and larger , the ability to run several instances of an operating system(s) on a given hardware system , so that each OS instance plus its subsystems scale or perform well , has the advantage of an optimal aggregate performance , which can translate into cost savings for customers . Though static partitioning provides a solution to this overall performance (replacement scheme) optimization problem , DR enables an improved solution by providing the capability to dynamically move hardware resources to a needy OS instance in a timely fashion to match workload demands . Hence , DR capabilities serve as key building blocks for workload managers to provide self-optimizing capabilities in this age of rapid growth in Web servers on the internet .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (load demand) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Dynamic Reconfiguration : Basic Building Blocks For Autonomic Computing On IBM PSeries Servers . A logical partition in an IBM pSeries(TM) symmetric multiprocessor (SMP) system is a subset of the hardware of the SMP that can host an operating system (OS) instance . Dynamic reconfiguration (DR) on these logically partitioned servers enables the movement of the hardware resources (such as processors , memory , and I/O slots) from one logical partition to another without requiring reboots . This capability also enables automatically moving hardware resources to a needy OS instance nondisruptively . Today , as SMPs and non-uniform memory access (NUMA) systems become larger and larger , the ability to run several instances of an operating system(s) on a given hardware system , so that each OS instance plus its subsystems scale or perform well , has the advantage of an optimal aggregate performance , which can translate into cost savings for customers . Though static partitioning provides a solution to this overall performance optimization problem , DR enables an improved solution by providing the capability to dynamically move hardware resources to a needy OS instance in a timely fashion to match workload demand (memory usage) s . Hence , DR capabilities serve as key building blocks for workload managers to provide self-optimizing capabilities in this age of rapid growth in Web servers on the internet .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme (overall performance) .
Dynamic Reconfiguration : Basic Building Blocks For Autonomic Computing On IBM PSeries Servers . A logical partition in an IBM pSeries(TM) symmetric multiprocessor (SMP) system is a subset of the hardware of the SMP that can host an operating system (OS) instance . Dynamic reconfiguration (DR) on these logically partitioned servers enables the movement of the hardware resources (such as processors , memory , and I/O slots) from one logical partition to another without requiring reboots . This capability also enables automatically moving hardware resources to a needy OS instance nondisruptively . Today , as SMPs and non-uniform memory access (NUMA) systems become larger and larger , the ability to run several instances of an operating system(s) on a given hardware system , so that each OS instance plus its subsystems scale or perform well , has the advantage of an optimal aggregate performance , which can translate into cost savings for customers . Though static partitioning provides a solution to this overall performance (replacement scheme) optimization problem , DR enables an improved solution by providing the capability to dynamically move hardware resources to a needy OS instance in a timely fashion to match workload demands . Hence , DR capabilities serve as key building blocks for workload managers to provide self-optimizing capabilities in this age of rapid growth in Web servers on the internet .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (load demand) tracking .
Dynamic Reconfiguration : Basic Building Blocks For Autonomic Computing On IBM PSeries Servers . A logical partition in an IBM pSeries(TM) symmetric multiprocessor (SMP) system is a subset of the hardware of the SMP that can host an operating system (OS) instance . Dynamic reconfiguration (DR) on these logically partitioned servers enables the movement of the hardware resources (such as processors , memory , and I/O slots) from one logical partition to another without requiring reboots . This capability also enables automatically moving hardware resources to a needy OS instance nondisruptively . Today , as SMPs and non-uniform memory access (NUMA) systems become larger and larger , the ability to run several instances of an operating system(s) on a given hardware system , so that each OS instance plus its subsystems scale or perform well , has the advantage of an optimal aggregate performance , which can translate into cost savings for customers . Though static partitioning provides a solution to this overall performance optimization problem , DR enables an improved solution by providing the capability to dynamically move hardware resources to a needy OS instance in a timely fashion to match workload demand (memory usage) s . Hence , DR capabilities serve as key building blocks for workload managers to provide self-optimizing capabilities in this age of rapid growth in Web servers on the internet .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme (overall performance) .
Dynamic Reconfiguration : Basic Building Blocks For Autonomic Computing On IBM PSeries Servers . A logical partition in an IBM pSeries(TM) symmetric multiprocessor (SMP) system is a subset of the hardware of the SMP that can host an operating system (OS) instance . Dynamic reconfiguration (DR) on these logically partitioned servers enables the movement of the hardware resources (such as processors , memory , and I/O slots) from one logical partition to another without requiring reboots . This capability also enables automatically moving hardware resources to a needy OS instance nondisruptively . Today , as SMPs and non-uniform memory access (NUMA) systems become larger and larger , the ability to run several instances of an operating system(s) on a given hardware system , so that each OS instance plus its subsystems scale or perform well , has the advantage of an optimal aggregate performance , which can translate into cost savings for customers . Though static partitioning provides a solution to this overall performance (replacement scheme) optimization problem , DR enables an improved solution by providing the capability to dynamically move hardware resources to a needy OS instance in a timely fashion to match workload demands . Hence , DR capabilities serve as key building blocks for workload managers to provide self-optimizing capabilities in this age of rapid growth in Web servers on the internet .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (load demand) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Dynamic Reconfiguration : Basic Building Blocks For Autonomic Computing On IBM PSeries Servers . A logical partition in an IBM pSeries(TM) symmetric multiprocessor (SMP) system is a subset of the hardware of the SMP that can host an operating system (OS) instance . Dynamic reconfiguration (DR) on these logically partitioned servers enables the movement of the hardware resources (such as processors , memory , and I/O slots) from one logical partition to another without requiring reboots . This capability also enables automatically moving hardware resources to a needy OS instance nondisruptively . Today , as SMPs and non-uniform memory access (NUMA) systems become larger and larger , the ability to run several instances of an operating system(s) on a given hardware system , so that each OS instance plus its subsystems scale or perform well , has the advantage of an optimal aggregate performance , which can translate into cost savings for customers . Though static partitioning provides a solution to this overall performance optimization problem , DR enables an improved solution by providing the capability to dynamically move hardware resources to a needy OS instance in a timely fashion to match workload demand (memory usage) s . Hence , DR capabilities serve as key building blocks for workload managers to provide self-optimizing capabilities in this age of rapid growth in Web servers on the internet .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme (overall performance) .
Dynamic Reconfiguration : Basic Building Blocks For Autonomic Computing On IBM PSeries Servers . A logical partition in an IBM pSeries(TM) symmetric multiprocessor (SMP) system is a subset of the hardware of the SMP that can host an operating system (OS) instance . Dynamic reconfiguration (DR) on these logically partitioned servers enables the movement of the hardware resources (such as processors , memory , and I/O slots) from one logical partition to another without requiring reboots . This capability also enables automatically moving hardware resources to a needy OS instance nondisruptively . Today , as SMPs and non-uniform memory access (NUMA) systems become larger and larger , the ability to run several instances of an operating system(s) on a given hardware system , so that each OS instance plus its subsystems scale or perform well , has the advantage of an optimal aggregate performance , which can translate into cost savings for customers . Though static partitioning provides a solution to this overall performance (replacement scheme) optimization problem , DR enables an improved solution by providing the capability to dynamically move hardware resources to a needy OS instance in a timely fashion to match workload demands . Hence , DR capabilities serve as key building blocks for workload managers to provide self-optimizing capabilities in this age of rapid growth in Web servers on the internet .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (load demand) tracking .
Dynamic Reconfiguration : Basic Building Blocks For Autonomic Computing On IBM PSeries Servers . A logical partition in an IBM pSeries(TM) symmetric multiprocessor (SMP) system is a subset of the hardware of the SMP that can host an operating system (OS) instance . Dynamic reconfiguration (DR) on these logically partitioned servers enables the movement of the hardware resources (such as processors , memory , and I/O slots) from one logical partition to another without requiring reboots . This capability also enables automatically moving hardware resources to a needy OS instance nondisruptively . Today , as SMPs and non-uniform memory access (NUMA) systems become larger and larger , the ability to run several instances of an operating system(s) on a given hardware system , so that each OS instance plus its subsystems scale or perform well , has the advantage of an optimal aggregate performance , which can translate into cost savings for customers . Though static partitioning provides a solution to this overall performance optimization problem , DR enables an improved solution by providing the capability to dynamically move hardware resources to a needy OS instance in a timely fashion to match workload demand (memory usage) s . Hence , DR capabilities serve as key building blocks for workload managers to provide self-optimizing capabilities in this age of rapid growth in Web servers on the internet .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme (overall performance) .
Dynamic Reconfiguration : Basic Building Blocks For Autonomic Computing On IBM PSeries Servers . A logical partition in an IBM pSeries(TM) symmetric multiprocessor (SMP) system is a subset of the hardware of the SMP that can host an operating system (OS) instance . Dynamic reconfiguration (DR) on these logically partitioned servers enables the movement of the hardware resources (such as processors , memory , and I/O slots) from one logical partition to another without requiring reboots . This capability also enables automatically moving hardware resources to a needy OS instance nondisruptively . Today , as SMPs and non-uniform memory access (NUMA) systems become larger and larger , the ability to run several instances of an operating system(s) on a given hardware system , so that each OS instance plus its subsystems scale or perform well , has the advantage of an optimal aggregate performance , which can translate into cost savings for customers . Though static partitioning provides a solution to this overall performance (replacement scheme) optimization problem , DR enables an improved solution by providing the capability to dynamically move hardware resources to a needy OS instance in a timely fashion to match workload demands . Hence , DR capabilities serve as key building blocks for workload managers to provide self-optimizing capabilities in this age of rapid growth in Web servers on the internet .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
USENIX ASSOCIATION PROCEEDINGS OF THE FAST 02 CONFERENCE ON FILE AND STORAGE TECHNOLOGIES. : 189-201 2002

Publication Year: 2002

Selecting RAID Levels For Disk Arrays

Hewlett Packard Labs

Anderson, Swaminathan, Veitch, Alvarez, Wilkes
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources (storage system) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Selecting RAID Levels For Disk Arrays . Disk arrays have a myriad of configuration parameters that interact in counter-intuitive ways , and those interactions can have significant impacts on cost , performance , and reliability . Even after values for these parameters have been chosen , there are exponentially-many ways to map data onto the disk arrays' logical units . Meanwhile , the importance of correct choices is increasing : storage systems (cloud resources) represent a growing fraction of total system cost , they need to respond more rapidly to changing needs , and there is less and less tolerance for mistakes . We believe that automatic design and configuration of storage systems is the only viable solution to these issues . To that end , we present a comparative study of a range of techniques for programmatically choosing the RAID levels to use in a disk array . Our simplest approaches are modeled on existing , manual rules of thumb : they "tag" data with a RAID level before determining the configuration of the array to which it is assigned . Our best approach simultaneously determines the RAID levels for the data , the array configuration , and the layout of data on that array . It operates as an optimization process with the twin goals of minimizing array cost while ensuring that storage workload performance requirements will be met . This approach produces robust solutions with an average cost/performance 14-17% better than the best results for the tagging schemes , and up to 150-200% better than their worst solutions . We believe that this is the first presentation and systematic analysis of a variety of novel , fully-automatic RAID-level selection techniques .
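Claim 1 of US9635134B2, charted above, recites a control flow: sample per-VM consumption, prioritize under a first scheme, and switch to a second scheme and migrate to alternate resources when the change in consumption exceeds a threshold or the environment approaches its allowed capacity. The Python sketch below only illustrates that flow under stated assumptions; the VM class, the scheme callables, and the migration hook are hypothetical and do not come from the patent or the cited reference.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class VM:
    vid: str
    rate: float       # current consumption sample (CPU + memory + I/O, arbitrary units)
    prev_rate: float  # previous sample

def manage(vms: List[VM], threshold: float, max_capacity: float,
           first_scheme: Callable, second_scheme: Callable, migrate: Callable):
    order = first_scheme(vms)                                  # e.g. an LIRS-style priority (claims 2-4)
    spike = any(abs(v.rate - v.prev_rate) > threshold for v in vms)
    over_capacity = sum(v.rate for v in vms) > max_capacity
    if spike or over_capacity:
        order = second_scheme(vms)                             # e.g. an LRU priority (claim 6)
        migrate(order[-1])                                     # move a low-priority VM to alternate resources
    return order

by_rate = lambda vms: sorted(vms, key=lambda v: v.rate, reverse=True)
result = manage([VM("vm1", 5.0, 1.0), VM("vm2", 2.0, 2.0)], threshold=3.0,
                max_capacity=10.0, first_scheme=by_rate,
                second_scheme=by_rate, migrate=lambda v: print("migrating", v.vid))
print([v.vid for v in result])
```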

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (storage system) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
Selecting RAID Levels For Disk Arrays . Disk arrays have a myriad of configuration parameters that interact in counter-intuitive ways , and those interactions can have significant impacts on cost , performance , and reliability . Even after values for these parameters have been chosen , there are exponentially-many ways to map data onto the disk arrays' logical units . Meanwhile , the importance of correct choices is increasing : storage systems (cloud resources) represent a growing fraction of total system cost , they need to respond more rapidly to changing needs , and there is less and less tolerance for mistakes . We believe that automatic design and configuration of storage systems is the only viable solution to these issues . To that end , we present a comparative study of a range of techniques for programmatically choosing the RAID levels to use in a disk array . Our simplest approaches are modeled on existing , manual rules of thumb : they "tag" data with a RAID level before determining the configuration of the array to which it is assigned . Our best approach simultaneously determines the RAID levels for the data , the array configuration , and the layout of data on that array . It operates as an optimization process with the twin goals of minimizing array cost while ensuring that storage workload performance requirements will be met . This approach produces robust solutions with an average cost/performance 14-17% better than the best results for the tagging schemes , and up to 150-200% better than their worst solutions . We believe that this is the first presentation and systematic analysis of a variety of novel , fully-automatic RAID-level selection techniques .
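Claim 2 relies on a low inter-reference recency set (LIRS) replacement scheme. The sketch below only illustrates the inter-reference recency signal that LIRS is built on, applied here to VM identifiers; a faithful LIRS implementation additionally maintains LIR/HIR sets with a pruned stack and a queue, which is omitted. All names are hypothetical.

```python
class SimpleIRRTracker:
    """Rough stand-in for the LIRS signal: rank items by inter-reference
    recency, i.e. how many distinct other items were seen between two uses."""
    def __init__(self):
        self.history = []   # access log, most recent last
        self.irr = {}       # vm_id -> inter-reference recency

    def access(self, vm_id):
        if vm_id in self.history:
            since = self.history[self.history.index(vm_id) + 1:]
            self.irr[vm_id] = len(set(since))      # distinct items since the last use
            self.history.remove(vm_id)
        else:
            self.irr[vm_id] = float("inf")         # no reuse observed yet
        self.history.append(vm_id)

    def priority_order(self):
        # lower IRR = hotter working set = higher priority for cloud resources
        return sorted(self.irr, key=self.irr.get)

t = SimpleIRRTracker()
for vid in ["vm1", "vm2", "vm1", "vm3", "vm2"]:
    t.access(vid)
print(t.priority_order())   # ['vm1', 'vm2', 'vm3']
```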

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (storage system) using the LIRS replacement scheme comprises using LIRS based processor usage tracking .
Selecting RAID Levels For Disk Arrays . Disk arrays have a myriad of configuration parameters that interact in counter-intuitive ways , and those interactions can have significant impacts on cost , performance , and reliability . Even after values for these parameters have been chosen , there are exponentially-many ways to map data onto the disk arrays' logical units . Meanwhile , the importance of correct choices is increasing : storage systems (cloud resources) represent a growing fraction of total system cost , they need to respond more rapidly to changing needs , and there is less and less tolerance for mistakes . We believe that automatic design and configuration of storage systems is the only viable solution to these issues . To that end , we present a comparative study of a range of techniques for programmatically choosing the RAID levels to use in a disk array . Our simplest approaches are modeled on existing , manual rules of thumb : they "tag" data with a RAID level before determining the configuration of the array to which it is assigned . Our best approach simultaneously determines the RAID levels for the data , the array configuration , and the layout of data on that array . It operates as an optimization process with the twin goals of minimizing array cost while ensuring that storage workload performance requirements will be met . This approach produces robust solutions with an average cost/performance 14-17% better than the best results for the tagging schemes , and up to 150-200% better than their worst solutions . We believe that this is the first presentation and systematic analysis of a variety of novel , fully-automatic RAID-level selection techniques .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (storage system) using the LIRS replacement scheme comprises using LIRS based memory usage tracking .
Selecting RAID Levels For Disk Arrays . Disk arrays have a myriad of configuration parameters that interact in counter-intuitive ways , and those interactions can have significant impacts on cost , performance , and reliability . Even after values for these parameters have been chosen , there are exponentially-many ways to map data onto the disk arrays' logical units . Meanwhile , the importance of correct choices is increasing : storage systems (cloud resources) represent a growing fraction of total system cost , they need to respond more rapidly to changing needs , and there is less and less tolerance for mistakes . We believe that automatic design and configuration of storage systems is the only viable solution to these issues . To that end , we present a comparative study of a range of techniques for programmatically choosing the RAID levels to use in a disk array . Our simplest approaches are modeled on existing , manual rules of thumb : they "tag" data with a RAID level before determining the configuration of the array to which it is assigned . Our best approach simultaneously determines the RAID levels for the data , the array configuration , and the layout of data on that array . It operates as an optimization process with the twin goals of minimizing array cost while ensuring that storage workload performance requirements will be met . This approach produces robust solutions with an average cost/performance 14-17% better than the best results for the tagging schemes , and up to 150-200% better than their worst solutions . We believe that this is the first presentation and systematic analysis of a variety of novel , fully-automatic RAID-level selection techniques .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (storage system) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
Selecting RAID Levels For Disk Arrays . Disk arrays have a myriad of configuration parameters that interact in counter-intuitive ways , and those interactions can have significant impacts on cost , performance , and reliability . Even after values for these parameters have been chosen , there are exponentially-many ways to map data onto the disk arrays' logical units . Meanwhile , the importance of correct choices is increasing : storage systems (cloud resources) represent a growing fraction of total system cost , they need to respond more rapidly to changing needs , and there is less and less tolerance for mistakes . We believe that automatic design and configuration of storage systems is the only viable solution to these issues . To that end , we present a comparative study of a range of techniques for programmatically choosing the RAID levels to use in a disk array . Our simplest approaches are modeled on existing , manual rules of thumb : they "tag" data with a RAID level before determining the configuration of the array to which it is assigned . Our best approach simultaneously determines the RAID levels for the data , the array configuration , and the layout of data on that array . It operates as an optimization process with the twin goals of minimizing array cost while ensuring that storage workload performance requirements will be met . This approach produces robust solutions with an average cost/performance 14-17% better than the best results for the tagging schemes , and up to 150-200% better than their worst solutions . We believe that this is the first presentation and systematic analysis of a variety of novel , fully-automatic RAID-level selection techniques .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (storage system) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
Selecting RAID Levels For Disk Arrays . Disk arrays have a myriad of configuration parameters that interact in counter-intuitive ways , and those interactions can have significant impacts on cost , performance , and reliability . Even after values for these parameters have been chosen , there are exponentially-many ways to map data onto the disk arrays' logical units . Meanwhile , the importance of correct choices is increasing : storage systems (cloud resources) represent a growing fraction of total system cost , they need to respond more rapidly to changing needs , and there is less and less tolerance for mistakes . We believe that automatic design and configuration of storage systems is the only viable solution to these issues . To that end , we present a comparative study of a range of techniques for programmatically choosing the RAID levels to use in a disk array . Our simplest approaches are modeled on existing , manual rules of thumb : they "tag" data with a RAID level before determining the configuration of the array to which it is assigned . Our best approach simultaneously determines the RAID levels for the data , the array configuration , and the layout of data on that array . It operates as an optimization process with the twin goals of minimizing array cost while ensuring that storage workload performance requirements will be met . This approach produces robust solutions with an average cost/performance 14-17% better than the best results for the tagging schemes , and up to 150-200% better than their worst solutions . We believe that this is the first presentation and systematic analysis of a variety of novel , fully-automatic RAID-level selection techniques .
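Claim 6 uses a least recently used (LRU) replacement scheme as the second resource management scheme. The following minimal sketch shows LRU bookkeeping over VM identifiers, with the least recently touched VM surfacing first as a migration candidate; the class and method names are invented for illustration and are not the patent's implementation.

```python
from collections import OrderedDict

class LRUPriority:
    """Minimal LRU bookkeeping: the VM touched longest ago becomes the first
    candidate to be deprioritized or migrated."""
    def __init__(self):
        self.order = OrderedDict()

    def touch(self, vm_id):
        self.order.pop(vm_id, None)
        self.order[vm_id] = True          # most recently used moves to the end

    def migration_candidates(self):
        return list(self.order)           # least recently used first

lru = LRUPriority()
for vid in ["vm1", "vm2", "vm1"]:
    lru.touch(vid)
print(lru.migration_candidates())        # ['vm2', 'vm1']
```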

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources (storage system) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Selecting RAID Levels For Disk Arrays . Disk arrays have a myriad of configuration parameters that interact in counter-intuitive ways , and those interactions can have significant impacts on cost , performance , and reliability . Even after values for these parameters have been chosen , there are exponentially-many ways to map data onto the disk arrays' logical units . Meanwhile , the importance of correct choices is increasing : storage systems (cloud resources) represent a growing fraction of total system cost , they need to respond more rapidly to changing needs , and there is less and less tolerance for mistakes . We believe that automatic design and configuration of storage systems is the only viable solution to these issues . To that end , we present a comparative study of a range of techniques for programmatically choosing the RAID levels to use in a disk array . Our simplest approaches are modeled on existing , manual rules of thumb : they "tag" data with a RAID level before determining the configuration of the array to which it is assigned . Our best approach simultaneously determines the RAID levels for the data , the array configuration , and the layout of data on that array . It operates as an optimization process with the twin goals of minimizing array cost while ensuring that storage workload performance requirements will be met . This approach produces robust solutions with an average cost/performance 14-17% better than the best results for the tagging schemes , and up to 150-200% better than their worst solutions . We believe that this is the first presentation and systematic analysis of a variety of novel , fully-automatic RAID-level selection techniques .
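The charted abstract frames RAID-level selection as an optimization: minimize array cost while meeting storage workload performance requirements. The toy sketch below brute-forces that trade-off over invented per-store options; it is not the paper's solver, and all numbers are made up for illustration.

```python
from itertools import product

# Hypothetical per-store options: (raid_level, cost, delivered_iops).
OPTIONS = {"store_a": [("RAID1", 4, 500), ("RAID5", 3, 350)],
           "store_b": [("RAID1", 6, 800), ("RAID5", 4, 600)]}
REQUIRED_IOPS = {"store_a": 400, "store_b": 550}

def cheapest_feasible(options, required):
    """Exhaustively pick the lowest-cost RAID-level assignment that meets
    every store's performance requirement."""
    best = None
    for combo in product(*options.values()):
        assignment = dict(zip(options, combo))
        if all(iops >= required[s] for s, (_, _, iops) in assignment.items()):
            cost = sum(c for _, c, _ in combo)
            if best is None or cost < best[0]:
                best = (cost, {s: lvl for s, (lvl, _, _) in assignment.items()})
    return best

print(cheapest_feasible(OPTIONS, REQUIRED_IOPS))   # (8, {'store_a': 'RAID1', 'store_b': 'RAID5'})
```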

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (storage system) using a low inter-reference recency set (LIRS) replacement scheme .
Selecting RAID Levels For Disk Arrays . Disk arrays have a myriad of configuration parameters that interact in counter-intuitive ways , and those interactions can have significant impacts on cost , performance , and reliability . Even after values for these parameters have been chosen , there are exponentially-many ways to map data onto the disk arrays' logical units . Meanwhile , the importance of correct choices is increasing : storage systems (cloud resources) represent a growing fraction of total system cost , they need to respond more rapidly to changing needs , and there is less and less tolerance for mistakes . We believe that automatic design and configuration of storage systems is the only viable solution to these issues . To that end , we present a comparative study of a range of techniques for programmatically choosing the RAID levels to use in a disk array . Our simplest approaches are modeled on existing , manual rules of thumb : they "tag" data with a RAID level before determining the configuration of the array to which it is assigned . Our best approach simultaneously determines the RAID levels for the data , the array configuration , and the layout of data on that array . It operates as an optimization process with the twin goals of minimizing array cost while ensuring that storage workload performance requirements will be met . This approach produces robust solutions with an average cost/performance 14-17% better than the best results for the tagging schemes , and up to 150-200% better than their worst solutions . We believe that this is the first presentation and systematic analysis of a variety of novel , fully-automatic RAID-level selection techniques .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (storage system) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
Selecting RAID Levels For Disk Arrays . Disk arrays have a myriad of configuration parameters that interact in counter-intuitive ways , and those interactions can have significant impacts on cost , performance , and reliability . Even after values for these parameters have been chosen , there are exponentially-many ways to map data onto the disk arrays' logical units . Meanwhile , the importance of correct choices is increasing : storage systems (cloud resources) represent a growing fraction of total system cost , they need to respond more rapidly to changing needs , and there is less and less tolerance for mistakes . We believe that automatic design and configuration of storage systems is the only viable solution to these issues . To that end , we present a comparative study of a range of techniques for programmatically choosing the RAID levels to use in a disk array . Our simplest approaches are modeled on existing , manual rules of thumb : they "tag" data with a RAID level before determining the configuration of the array to which it is assigned . Our best approach simultaneously determines the RAID levels for the data , the array configuration , and the layout of data on that array . It operates as an optimization process with the twin goals of minimizing array cost while ensuring that storage workload performance requirements will be met . This approach produces robust solutions with an average cost/performance 14-17% better than the best results for the tagging schemes , and up to 150-200% better than their worst solutions . We believe that this is the first presentation and systematic analysis of a variety of novel , fully-automatic RAID-level selection techniques .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (storage system) using least recently used (LRU) replacement scheme .
Selecting RAID Levels For Disk Arrays . Disk arrays have a myriad of configuration parameters that interact in counter-intuitive ways , and those interactions can have significant impacts on cost , performance , and reliability . Even after values for these parameters have been chosen , there are exponentially-many ways to map data onto the disk arrays' logical units . Meanwhile , the importance of correct choices is increasing : storage systems (cloud resources) represent a growing fraction of total system cost , they need to respond more rapidly to changing needs , and there is less and less tolerance for mistakes . We believe that automatic design and configuration of storage systems is the only viable solution to these issues . To that end , we present a comparative study of a range of techniques for programmatically choosing the RAID levels to use in a disk array . Our simplest approaches are modeled on existing , manual rules of thumb : they "tag" data with a RAID level before determining the configuration of the array to which it is assigned . Our best approach simultaneously determines the RAID levels for the data , the array configuration , and the layout of data on that array . It operates as an optimization process with the twin goals of minimizing array cost while ensuring that storage workload performance requirements will be met . This approach produces robust solutions with an average cost/performance 14-17% better than the best results for the tagging schemes , and up to 150-200% better than their worst solutions . We believe that this is the first presentation and systematic analysis of a variety of novel , fully-automatic RAID-level selection techniques .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (storage system) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (logical unit) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
Selecting RAID Levels For Disk Arrays . Disk arrays have a myriad of configuration parameters that interact in counter-intuitive ways , and those interactions can have significant impacts on cost , performance , and reliability . Even after values for these parameters have been chosen , there are exponentially-many ways to map data onto the disk arrays' logical units (I/O access rate) . Meanwhile , the importance of correct choices is increasing : storage systems (cloud resources) represent a growing fraction of total system cost , they need to respond more rapidly to changing needs , and there is less and less tolerance for mistakes . We believe that automatic design and configuration of storage systems is the only viable solution to these issues . To that end , we present a comparative study of a range of techniques for programmatically choosing the RAID levels to use in a disk array . Our simplest approaches are modeled on existing , manual rules of thumb : they "tag" data with a RAID level before determining the configuration of the array to which it is assigned . Our best approach simultaneously determines the RAID levels for the data , the array configuration , and the layout of data on that array . It operates as an optimization process with the twin goals of minimizing array cost while ensuring that storage workload performance requirements will be met . This approach produces robust solutions with an average cost/performance 14-17% better than the best results for the tagging schemes , and up to 150-200% better than their worst solutions . We believe that this is the first presentation and systematic analysis of a variety of novel , fully-automatic RAID-level selection techniques .
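Claim 13 (like claims 1 and 7) folds a "change region size based on changed regions of a graphical display" into the consumption-rate signal. The sketch below shows one plausible way to compute such a signal by counting dirty tiles between two frames; the tile size and frame representation are assumptions of this sketch, not the patent's method.

```python
def change_region_size(prev_frame, curr_frame, tile=8):
    """Count tile x tile blocks whose pixels differ between two frames
    (frames given as lists of pixel rows). A toy stand-in for the
    'change region size' signal recited in claims 1, 7, and 13."""
    h, w = len(curr_frame), len(curr_frame[0])
    changed = 0
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            if any(prev_frame[y][x] != curr_frame[y][x]
                   for y in range(ty, min(ty + tile, h))
                   for x in range(tx, min(tx + tile, w))):
                changed += 1
    return changed

a = [[0] * 16 for _ in range(16)]
b = [row[:] for row in a]
b[2][3] = 1                        # one changed pixel -> one dirty tile
print(change_region_size(a, b))    # 1
```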

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (storage system) using a low inter-reference recency set (LIRS) replacement scheme .
Selecting RAID Levels For Disk Arrays . Disk arrays have a myriad of configuration parameters that interact in counter-intuitive ways , and those interactions can have significant impacts on cost , performance , and reliability . Even after values for these parameters have been chosen , there are exponentially-many ways to map data onto the disk arrays' logical units . Meanwhile , the importance of correct choices is increasing : storage systems (cloud resources) represent a growing fraction of total system cost , they need to respond more rapidly to changing needs , and there is less and less tolerance for mistakes . We believe that automatic design and configuration of storage systems is the only viable solution to these issues . To that end , we present a comparative study of a range of techniques for programmatically choosing the RAID levels to use in a disk array . Our simplest approaches are modeled on existing , manual rules of thumb : they "tag" data with a RAID level before determining the configuration of the array to which it is assigned . Our best approach simultaneously determines the RAID levels for the data , the array configuration , and the layout of data on that array . It operates as an optimization process with the twin goals of minimizing array cost while ensuring that storage workload performance requirements will be met . This approach produces robust solutions with an average cost/performance 14-17% better than the best results for the tagging schemes , and up to 150-200% better than their worst solutions . We believe that this is the first presentation and systematic analysis of a variety of novel , fully-automatic RAID-level selection techniques .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (storage system) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
Selecting RAID Levels For Disk Arrays . Disk arrays have a myriad of configuration parameters that interact in counter-intuitive ways , and those interactions can have significant impacts on cost , performance , and reliability . Even after values for these parameters have been chosen , there are exponentially-many ways to map data onto the disk arrays' logical units . Meanwhile , the importance of correct choices is increasing : storage systems (cloud resources) represent a growing fraction of total system cost , they need to respond more rapidly to changing needs , and there is less and less tolerance for mistakes . We believe that automatic design and configuration of storage systems is the only viable solution to these issues . To that end , we present a comparative study of a range of techniques for programmatically choosing the RAID levels to use in a disk array . Our simplest approaches are modeled on existing , manual rules of thumb : they "tag" data with a RAID level before determining the configuration of the array to which it is assigned . Our best approach simultaneously determines the RAID levels for the data , the array configuration , and the layout of data on that array . It operates as an optimization process with the twin goals of minimizing array cost while ensuring that storage workload performance requirements will be met . This approach produces robust solutions with an average cost/performance 14-17% better than the best results for the tagging schemes , and up to 150-200% better than their worst solutions . We believe that this is the first presentation and systematic analysis of a variety of novel , fully-automatic RAID-level selection techniques .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (storage system) using least recently used (LRU) replacement scheme .
Selecting RAID Levels For Disk Arrays . Disk arrays have a myriad of configuration parameters that interact in counter-intuitive ways , and those interactions can have significant impacts on cost , performance , and reliability . Even after values for these parameters have been chosen , there are exponentially-many ways to map data onto the disk arrays' logical units . Meanwhile , the importance of correct choices is increasing : storage systems (cloud resources) represent a growing fraction of total system cost , they need to respond more rapidly to changing needs , and there is less and less tolerance for mistakes . We believe that automatic design and configuration of storage systems is the only viable solution to these issues . To that end , we present a comparative study of a range of techniques for programmatically choosing the RAID levels to use in a disk array . Our simplest approaches are modeled on existing , manual rules of thumb : they "tag" data with a RAID level before determining the configuration of the array to which it is assigned . Our best approach simultaneously determines the RAID levels for the data , the array configuration , and the layout of data on that array . It operates as an optimization process with the twin goals of minimizing array cost while ensuring that storage workload performance requirements will be met . This approach produces robust solutions with an average cost/performance 14-17% better than the best results for the tagging schemes , and up to 150-200% better than their worst solutions . We believe that this is the first presentation and systematic analysis of a variety of novel , fully-automatic RAID-level selection techniques .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN102402458A

Filed: 2011-10-09     Issued: 2012-04-04

Virtual machine and/or multi-level scheduling support on systems with asymmetric processor cores

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC

A. Jayamohan
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing (a computing device) environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (use) , memory usage (use) , or input/output (I/O) access rates (available) for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102402458A
CLAIM 1
. A computing device (cloud computing, cloud computing resource manager) , comprising : one or more processors (802) ; and one or more computer-readable media (804) having stored thereon a plurality of instructions that , when executed by the one or more processors , cause the one or more processors to perform the following acts : identify (402) , for each of a plurality of physical processor cores of the one or more processors , one or more features supported by the physical processor core ; identify (404) a number of virtual processor cores of a virtual machine of the computing device ; determine (406) , based at least in part on the one or more features supported by each of the plurality of physical processor cores and on the number of virtual processor cores of the virtual machine , a set of one or more features of the plurality of physical processor cores to be made available (access rates) to the virtual processor cores of the virtual machine ; and make (408) the set of one or more features of the plurality of physical cores available to the virtual processor cores of the virtual machine .
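CN102402458A claim 1 determines a set of features of the physical processor cores to expose to a virtual machine's virtual cores, based on the features each physical core supports and the number of virtual cores. The sketch below renders one plausible reading of that step (intersecting the feature sets of the selected cores); the selection policy and all names are illustrative only, not taken from the patent.

```python
def exposed_features(physical_core_features, vcpu_count):
    """Pick enough physical cores to back the VM's virtual cores and expose
    only the features every selected core supports (one plausible reading of
    the claimed 'determine' step; the policy here is an assumption)."""
    # Prefer cores with richer feature sets so the intersection stays large.
    chosen = sorted(physical_core_features, key=len, reverse=True)[:vcpu_count]
    return set.intersection(*map(set, chosen)) if chosen else set()

cores = [{"sse2", "avx", "aes"}, {"sse2", "avx"}, {"sse2"}]
print(sorted(exposed_features(cores, vcpu_count=2)))   # ['avx', 'sse2']
```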

CN102402458A
CLAIM 6
. The computing device of claim 5 , wherein determining the set of one or more features further comprises determining that a particular one of the one or more features supported by one or more of the plurality of physical cores is not to be included in the set of one or more features if use (processor usage, memory usage, processor usage tracking) of that particular feature is indicated to exceed a threshold amount .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (use) tracking .
CN102402458A
CLAIM 6
. The computing device of claim 5 , wherein determining the set of one or more features further comprises determining that a particular one of the one or more features supported by one or more of the plurality of physical cores is not to be included in the set of one or more features if use (processor usage, memory usage, processor usage tracking) of that particular feature is indicated to exceed a threshold amount .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (use) tracking .
CN102402458A
CLAIM 6
. The computing device of claim 5 , wherein determining the set of one or more features further comprises determining that a particular one of the one or more features supported by one or more of the plurality of physical cores is not to be included in the set of one or more features if use (processor usage, memory usage, processor usage tracking) of that particular feature is indicated to exceed a threshold amount .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing (a computing device) resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (use) , memory usage (use) , or I/O access rates (available) for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102402458A
CLAIM 1
. A computing device (cloud computing, cloud computing resource manager) , comprising : one or more processors (802) ; and one or more computer-readable media (804) having stored thereon a plurality of instructions that , when executed by the one or more processors , cause the one or more processors to perform the following acts : identify (402) , for each of a plurality of physical processor cores of the one or more processors , one or more features supported by the physical processor core ; identify (404) a number of virtual processor cores of a virtual machine of the computing device ; determine (406) , based at least in part on the one or more features supported by each of the plurality of physical processor cores and on the number of virtual processor cores of the virtual machine , a set of one or more features of the plurality of physical processor cores to be made available (access rates) to the virtual processor cores of the virtual machine ; and make (408) the set of one or more features of the plurality of physical cores available to the virtual processor cores of the virtual machine .

CN102402458A
CLAIM 6
. The computing device of claim 5 , wherein determining the set of one or more features further comprises determining that a particular one of the one or more features supported by one or more of the plurality of physical cores is not to be included in the set of one or more features if use (processor usage, memory usage, processor usage tracking) of that particular feature is indicated to exceed a threshold amount .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (a computing device) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
CN102402458A
CLAIM 1
. A computing device (cloud computing, cloud computing resource manager) , comprising : one or more processors (802) ; and one or more computer-readable media (804) having stored thereon a plurality of instructions that , when executed by the one or more processors , cause the one or more processors to perform the following acts : identify (402) , for each of a plurality of physical processor cores of the one or more processors , one or more features supported by the physical processor core ; identify (404) a number of virtual processor cores of a virtual machine of the computing device ; determine (406) , based at least in part on the one or more features supported by each of the plurality of physical processor cores and on the number of virtual processor cores of the virtual machine , a set of one or more features of the plurality of physical processor cores to be made available to the virtual processor cores of the virtual machine ; and make (408) the set of one or more features of the plurality of physical cores available to the virtual processor cores of the virtual machine .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (a computing device) resource manager to use LIRS based processor usage (use) tracking .
CN102402458A
CLAIM 1
. A computing device (cloud computing, cloud computing resource manager) , comprising : one or more processors (802) ; and one or more computer-readable media (804) having stored thereon a plurality of instructions that , when executed by the one or more processors , cause the one or more processors to perform the following acts : identify (402) , for each of a plurality of physical processor cores of the one or more processors , one or more features supported by the physical processor core ; identify (404) a number of virtual processor cores of a virtual machine of the computing device ; determine (406) , based at least in part on the one or more features supported by each of the plurality of physical processor cores and on the number of virtual processor cores of the virtual machine , a set of one or more features of the plurality of physical processor cores to be made available to the virtual processor cores of the virtual machine ; and make (408) the set of one or more features of the plurality of physical cores available to the virtual processor cores of the virtual machine .

CN102402458A
CLAIM 6
. The computing device of claim 5 , wherein determining the set of one or more features further comprises determining that a particular one of the one or more features supported by one or more of the plurality of physical cores is not to be included in the set of one or more features if use (processor usage, memory usage, processor usage tracking) of that particular feature is indicated to exceed a threshold amount .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (a computing device) resource manager to use LIRS based memory usage (use) tracking .
CN102402458A
CLAIM 1
. A computing device (cloud computing, cloud computing resource manager) , comprising : one or more processors (802) ; and one or more computer-readable media (804) having stored thereon a plurality of instructions that , when executed by the one or more processors , cause the one or more processors to perform the following acts : identify (402) , for each of a plurality of physical processor cores of the one or more processors , one or more features supported by the physical processor core ; identify (404) a number of virtual processor cores of a virtual machine of the computing device ; determine (406) , based at least in part on the one or more features supported by each of the plurality of physical processor cores and on the number of virtual processor cores of the virtual machine , a set of one or more features of the plurality of physical processor cores to be made available to the virtual processor cores of the virtual machine ; and make (408) the set of one or more features of the plurality of physical cores available to the virtual processor cores of the virtual machine .

CN102402458A
CLAIM 6
. The computing device of claim 5 , wherein determining the set of one or more features further comprises determining that a particular one of the one or more features supported by one or more of the plurality of physical cores is not to be included in the set of one or more features if use (processor usage, memory usage, processor usage tracking) of that particular feature is indicated to exceed a threshold amount .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (a computing device) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
CN102402458A
CLAIM 1
. A computing device (cloud computing, cloud computing resource manager) , comprising : one or more processors (802) ; and one or more computer-readable media (804) having stored thereon a plurality of instructions that , when executed by the one or more processors , cause the one or more processors to perform the following acts : identify (402) , for each of a plurality of physical processor cores of the one or more processors , one or more features supported by the physical processor core ; identify (404) a number of virtual processor cores of a virtual machine of the computing device ; determine (406) , based at least in part on the one or more features supported by each of the plurality of physical processor cores and on the number of virtual processor cores of the virtual machine , a set of one or more features of the plurality of physical processor cores to be made available to the virtual processor cores of the virtual machine ; and make (408) the set of one or more features of the plurality of physical cores available to the virtual processor cores of the virtual machine .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing (a computing device) environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (use) , memory usage (use) , I/O access rates (available) , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102402458A
CLAIM 1
. A computing device (cloud computing, cloud computing resource manager) , comprising : one or more processors (802) ; and one or more computer-readable media (804) having stored thereon a plurality of instructions that , when executed by the one or more processors , cause the one or more processors to perform the following acts : identify (402) , for each of a plurality of physical processor cores of the one or more processors , one or more features supported by the physical processor core ; identify (404) a number of virtual processor cores of a virtual machine of the computing device ; determine (406) , based at least in part on the one or more features supported by each of the plurality of physical processor cores and on the number of virtual processor cores of the virtual machine , a set of one or more features of the plurality of physical processor cores to be made available (access rates) to the virtual processor cores of the virtual machine ; and make (408) the set of one or more features of the plurality of physical cores available to the virtual processor cores of the virtual machine .

CN102402458A
CLAIM 6
. The computing device of claim 5 , wherein determining the set of one or more features further comprises determining that a particular one of the one or more features supported by one or more of the plurality of physical cores is not to be included in the set of one or more features if use (processor usage, memory usage, processor usage tracking) of that particular feature is indicated to exceed a threshold amount .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (a computing device) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
CN102402458A
CLAIM 1
. A computing device (cloud computing, cloud computing resource manager) , comprising : one or more processors (802) ; and one or more computer-readable media (804) having stored thereon a plurality of instructions that , when executed by the one or more processors , cause the one or more processors to perform the following acts : identify (402) , for each of a plurality of physical processor cores of the one or more processors , one or more features supported by the physical processor core ; identify (404) a number of virtual processor cores of a virtual machine of the computing device ; determine (406) , based at least in part on the one or more features supported by each of the plurality of physical processor cores and on the number of virtual processor cores of the virtual machine , a set of one or more features of the plurality of physical processor cores to be made available to the virtual processor cores of the virtual machine ; and make (408) the set of one or more features of the plurality of physical cores available to the virtual processor cores of the virtual machine .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (a computing device) resource manager to use LIRS based processor usage (use) tracking .
CN102402458A
CLAIM 1
. A computing device (cloud computing, cloud computing resource manager) , comprising : one or more processors (802) ; and one or more computer-readable media (804) having stored thereon a plurality of instructions that , when executed by the one or more processors , cause the one or more processors to perform the following acts : identify (402) , for each of a plurality of physical processor cores of the one or more processors , one or more features supported by the physical processor core ; identify (404) a number of virtual processor cores of a virtual machine of the computing device ; determine (406) , based at least in part on the one or more features supported by each of the plurality of physical processor cores and on the number of virtual processor cores of the virtual machine , a set of one or more features of the plurality of physical processor cores to be made available to the virtual processor cores of the virtual machine ; and make (408) the set of one or more features of the plurality of physical cores available to the virtual processor cores of the virtual machine .

CN102402458A
CLAIM 6
. The computing device of claim 5 , wherein determining the set of one or more features further comprises determining that a particular one of the one or more features supported by one or more of the plurality of physical cores is not to be included in the set of one or more features if use (processor usage, memory usage, processor usage tracking) of that particular feature is indicated to exceed a threshold amount .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (a computing device) resource manager to use LIRS based memory usage (use) tracking .
CN102402458A
CLAIM 1
. A computing device (cloud computing, cloud computing resource manager) , comprising : one or more processors (802) ; and one or more computer-readable media (804) having stored thereon a plurality of instructions that , when executed by the one or more processors , cause the one or more processors to perform the following acts : identify (402) , for each of a plurality of physical processor cores of the one or more processors , one or more features supported by the physical processor core ; identify (404) a number of virtual processor cores of a virtual machine of the computing device ; determine (406) , based at least in part on the one or more features supported by each of the plurality of physical processor cores and on the number of virtual processor cores of the virtual machine , a set of one or more features of the plurality of physical processor cores to be made available to the virtual processor cores of the virtual machine ; and make (408) the set of one or more features of the plurality of physical cores available to the virtual processor cores of the virtual machine .

CN102402458A
CLAIM 6
. The computing device of claim 5 , wherein determining the set of one or more features further comprises determining that a particular one of the one or more features supported by one or more of the plurality of physical cores is not to be included in the set of one or more features if use (processor usage, memory usage, processor usage tracking) of that particular feature is indicated to exceed a threshold amount .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (a computing device) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
CN102402458A
CLAIM 1
. A computing device (cloud computing, cloud computing resource manager) , comprising : one or more processors (802) ; and one or more computer-readable media (804) having stored thereon a plurality of instructions that , when executed by the one or more processors , cause the one or more processors to perform the following acts : identify (402) , for each of a plurality of physical processor cores of the one or more processors , one or more features supported by the physical processor core ; identify (404) a number of virtual processor cores of a virtual machine of the computing device ; determine (406) , based at least in part on the one or more features supported by each of the plurality of physical processor cores and on the number of virtual processor cores of the virtual machine , a set of one or more features of the plurality of physical processor cores to be made available to the virtual processor cores of the virtual machine ; and make (408) the set of one or more features of the plurality of physical cores available to the virtual processor cores of the virtual machine .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20110302321A1

Filed: 2011-06-16     Issued: 2011-12-08

Data redirection system and method therefor

(Original Assignee) Circadence Corp     (Current Assignee) Sons Of Innovation LLC

Mark Vange, Marc Plumb, Michael Kouts, Glenn Sydney Wilson
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment (network resource, remote server) , comprising : determining a consumption rate of cloud resources (network resource, remote server) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource (first request) management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20110302321A1
CLAIM 1
. A system for redirecting a client to a remote server (cloud resources, cloud computing environment, alternate cloud resources) via an enhanced communications channel comprising : a gateway configured to provide access to said remote server via at least one enhanced channel ;
one or more enhanced channels between the gateway and the remote server ;
a first redirector configured to respond to the client by redirecting the client to the gateway ;
and a second redirector configured to receive a request from the client through a non-enhanced communications channel and to redirect the request to the first redirector .

US20110302321A1
CLAIM 15
. A method for redirecting data comprising : receiving at a first redirector , a first request (first resource) for network services through a first communication link , the network services provided by a network device ;
sending from the first redirector , a network address of a second redirector in response to the first request for the network services ;
selecting a second communication link from a plurality of second communication links that support communication with the network device , the second communication link provided by a gateway ;
receiving at the second redirector , a second request for the network services , the second request requesting the same network services as the first request ;
sending from the second redirector , a network address of the gateway in response to the second request for the network services ;
and providing the network services by transferring data through the selected second communication link .
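Claim 15 of US20110302321A1, quoted above, describes a two-hop redirection: the first redirector returns the address of a second redirector, which selects a communication link and returns the gateway address through which data then flows. The sketch below walks through that sequence with invented hosts and link names; it is not Circadence's implementation, only a reading aid.

```python
# Toy walk-through of the two-hop redirection in US20110302321A1 claim 15.
# Every name and address below is made up for illustration.
SECOND_REDIRECTOR = "redir2.example"
GATEWAY_LINKS = {"linkA": "gw-a.example", "linkB": "gw-b.example"}

def first_redirector(request):
    return SECOND_REDIRECTOR                 # steps 1-2: point the client at the second redirector

def second_redirector(request, pick="linkA"):
    return GATEWAY_LINKS[pick]               # steps 3-5: choose a link, return its gateway address

def client(service="video-feed"):
    hop1 = first_redirector(service)
    gateway = second_redirector(service)     # same services requested again (step 4)
    return f"transfer {service} via {gateway} (learned through {hop1})"

print(client())
```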

US20110302321A1
CLAIM 18
. The method of claim 15 wherein the gateway is within a first local area network and the network resource (cloud resources, cloud computing environment, alternate cloud resources) is within a second local area network remote from the first local area network .

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (network resource, remote server) using the first resource (first request) management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20110302321A1
CLAIM 1
. A system for redirecting a client to a remote server (cloud resources, cloud computing environment, alternate cloud resources) via an enhanced communications channel comprising : a gateway configured to provide access to said remote server via at least one enhanced channel ;
one or more enhanced channels between the gateway and the remote server ;
a first redirector configured to respond to the client by redirecting the client to the gateway ;
and a second redirector configured to receive a request from the client through a non-enhanced communications channel and to redirect the request to the first redirector .

US20110302321A1
CLAIM 15
. A method for redirecting data comprising : receiving at a first redirector , a first request (first resource) for network services through a first communication link , the network services provided by a network device ;
sending from the first redirector , a network address of a second redirector in response to the first request for the network services ;
selecting a second communication link from a plurality of second communication links that support communication with the network device , the second communication link provided by a gateway ;
receiving at the second redirector , a second request for the network services , the second request requesting the same network services as the first request ;
sending from the second redirector , a network address of the gateway in response to the second request for the network services ;
and providing the network services by transferring data through the selected second communication link .

US20110302321A1
CLAIM 18
. The method of claim 15 wherein the gateway is within a first local area network and the network resource (cloud resources, cloud computing environment, alternate cloud resources) is within a second local area network remote from the first local area network .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (network resource, remote server) using the LIRS replacement scheme comprises using LIRS based processor usage tracking .
US20110302321A1
CLAIM 1
. A system for redirecting a client to a remote server (cloud resources, cloud computing environment, alternate cloud resources) via an enhanced communications channel comprising : a gateway configured to provide access to said remote server via at least one enhanced channel ;
one or more enhanced channels between the gateway and the remote server ;
a first redirector configured to respond to the client by redirecting the client to the gateway ;
and a second redirector configured to receive a request from the client through a non-enhanced communications channel and to redirect the request to the first redirector .

US20110302321A1
CLAIM 18
. The method of claim 15 wherein the gateway is within a first local area network and the network resource (cloud resources, cloud computing environment, alternate cloud resources) is within a second local area network remote from the first local area network .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (network resource, remote server) using the LIRS replacement scheme comprises using LIRS based memory usage tracking .
US20110302321A1
CLAIM 1
. A system for redirecting a client to a remote server (cloud resources, cloud computing environment, alternate cloud resources) via an enhanced communications channel comprising : a gateway configured to provide access to said remote server via at least one enhanced channel ;
one or more enhanced channels between the gateway and the remote server ;
a first redirector configured to respond to the client by redirecting the client to the gateway ;
and a second redirector configured to receive a request from the client through a non-enhanced communications channel and to redirect the request to the first redirector .

US20110302321A1
CLAIM 18
. The method of claim 15 wherein the gateway is within a first local area network and the network resource (cloud resources, cloud computing environment, alternate cloud resources) is within a second local area network remote from the first local area network .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (network resource, remote server) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (network services) , or resources included in virtual private networks (VPNs) .
US20110302321A1
CLAIM 1
. A system for redirecting a client to a remote server (cloud resources, cloud computing environment, alternate cloud resources) via an enhanced communications channel comprising : a gateway configured to provide access to said remote server via at least one enhanced channel ;
one or more enhanced channels between the gateway and the remote server ;
a first redirector configured to respond to the client by redirecting the client to the gateway ;
and a second redirector configured to receive a request from the client through a non-enhanced communications channel and to redirect the request to the first redirector .

US20110302321A1
CLAIM 15
. A method for redirecting data comprising : receiving at a first redirector , a first request for network services (Internet resources) through a first communication link , the network services provided by a network device ;
sending from the first redirector , a network address of a second redirector in response to the first request for the network services ;
selecting a second communication link from a plurality of second communication links that support communication with the network device , the second communication link provided by a gateway ;
receiving at the second redirector , a second request for the network services , the second request requesting the same network services as the first request ;
sending from the second redirector , a network address of the gateway in response to the second request for the network services ;
and providing the network services by transferring data through the selected second communication link .

US20110302321A1
CLAIM 18
. The method of claim 15 wherein the gateway is within a first local area network and the network resource (cloud resources, cloud computing environment, alternate cloud resources) is within a second local area network remote from the first local area network .

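For orientation only, the following minimal Python sketch condenses the two-redirector flow recited in claim 15 of US20110302321A1 as charted above: the first redirector answers with the second redirector's address, the second redirector answers with the gateway's address, and the gateway carries the service data over a selected enhanced link. The addresses, link metrics and function names are illustrative assumptions, not material from the reference.

# Assumed addresses and link metrics; each function condenses one claimed step.
GATEWAY_ADDR = "10.0.0.1"
SECOND_REDIRECTOR_ADDR = "10.0.0.2"

def first_redirector(request):
    # Respond to the first request with the second redirector's address.
    return {"redirect_to": SECOND_REDIRECTOR_ADDR}

def second_redirector(request):
    # Respond to the repeated request with the gateway's address.
    return {"redirect_to": GATEWAY_ADDR}

def select_enhanced_link(links):
    # Pick one of the gateway's links to the network device (here: the one
    # with the most spare bandwidth).
    return max(links, key=lambda link: link["spare_bandwidth"])

def gateway_transfer(link, payload):
    # Provide the network services by moving the data over the chosen link.
    return "sent %d bytes via %s" % (len(payload), link["name"])

links = [{"name": "linkA", "spare_bandwidth": 20},
         {"name": "linkB", "spare_bandwidth": 80}]
request = {"service": "video", "client": "client-1"}
print(first_redirector(request))              # client is sent to 10.0.0.2
print(second_redirector(request))             # client is sent to the gateway
print(gateway_transfer(select_enhanced_link(links), b"x" * 1024))
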
US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (network resource, remote server) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
US20110302321A1
CLAIM 1
. A system for redirecting a client to a remote server (cloud resources, cloud computing environment, alternate cloud resources) via an enhanced communications channel comprising : a gateway configured to provide access to said remote server via at least one enhanced channel ;
one or more enhanced channels between the gateway and the remote server ;
a first redirector configured to respond to the client by redirecting the client to the gateway ;
and a second redirector configured to receive a request from the client through a non-enhanced communications channel and to redirect the request to the first redirector .

US20110302321A1
CLAIM 18
. The method of claim 15 wherein the gateway is within a first local area network and the network resource (cloud resources, cloud computing environment, alternate cloud resources) is within a second local area network remote from the first local area network .

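For orientation only, the following minimal Python sketch shows the least recently used (LRU) prioritization recited in claim 6: VMs are ranked by recency of observed activity and the least recently active VM is the first candidate for migrating its consumption. The class name and sample trace are illustrative assumptions.

from collections import OrderedDict

class LRUPriority:
    """Rank VMs by recency of observed activity; the least recently active
    VM is the first candidate for migrating its consumption elsewhere."""
    def __init__(self):
        self.order = OrderedDict()
    def touch(self, vm):                      # record activity in this interval
        self.order.pop(vm, None)
        self.order[vm] = True                 # move VM to the most-recent end
    def ranking(self):                        # most recently active first
        return list(reversed(self.order))
    def least_recently_used(self):
        return next(iter(self.order), None)

lru = LRUPriority()
for vm in ["vm1", "vm2", "vm1", "vm3"]:       # activity per sampling interval
    lru.touch(vm)
print(lru.ranking())                          # ['vm3', 'vm1', 'vm2']
print(lru.least_recently_used())              # 'vm2' -> first to migrate
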
US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources (network resource, remote server) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment (network resource, remote server) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource (first request) management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20110302321A1
CLAIM 1
. A system for redirecting a client to a remote server (cloud resources, cloud computing environment, alternate cloud resources) via an enhanced communications channel comprising : a gateway configured to provide access to said remote server via at least one enhanced channel ;
one or more enhanced channels between the gateway and the remote server ;
a first redirector configured to respond to the client by redirecting the client to the gateway ;
and a second redirector configured to receive a request from the client through a non-enhanced communications channel and to redirect the request to the first redirector .

US20110302321A1
CLAIM 15
. A method for redirecting data comprising : receiving at a first redirector , a first request (first resource) for network services through a first communication link , the network services provided by a network device ;
sending from the first redirector , a network address of a second redirector in response to the first request for the network services ;
selecting a second communication link from a plurality of second communication links that support communication with the network device , the second communication link provided by a gateway ;
receiving at the second redirector , a second request for the network services , the second request requesting the same network services as the first request ;
sending from the second redirector , a network address of the gateway in response to the second request for the network services ;
and providing the network services by transferring data through the selected second communication link .

US20110302321A1
CLAIM 18
. The method of claim 15 wherein the gateway is within a first local area network and the network resource (cloud resources, cloud computing environment, alternate cloud resources) is within a second local area network remote from the first local area network .

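For orientation only, the following minimal Python sketch traces the overall resource-manager loop recited in independent claim 7 above: rank the VMs under a first scheme from the monitored consumption rate, switch to a second scheme when the change in the consumption rate exceeds a predetermined threshold and the environment is at its maximum allowed capacity, and flag the resulting top-priority VM for migration to alternate cloud resources. The threshold values, helper names and sample data are illustrative assumptions, not figures from the patent.

# Assumed thresholds; the helper names are not from the patent.
CHANGE_THRESHOLD = 0.20     # swing in any monitored rate that counts as "exceeds"
MAX_CAPACITY = 0.85         # allowed utilization of the local cloud resources

def consumption_delta(prev, curr):
    # Largest change across the monitored CPU, memory, I/O and
    # change-region-size metrics for one VM.
    return max(abs(curr[k] - prev[k]) for k in curr)

def manage(vms, prev_samples, curr_samples, total_utilization):
    # First scheme: rank by the current CPU consumption rate.
    first_ranking = sorted(vms, key=lambda v: curr_samples[v]["cpu"], reverse=True)
    exceeded = any(consumption_delta(prev_samples[v], curr_samples[v]) > CHANGE_THRESHOLD
                   for v in vms)
    # Second scheme applies when the environment is at its maximum allowed
    # capacity and the change in consumption rate exceeds the threshold.
    if total_utilization >= MAX_CAPACITY and exceeded:
        second_ranking = sorted(vms, key=lambda v: sum(curr_samples[v].values()),
                                reverse=True)
        return {"priority": second_ranking,
                "migrate_to_alternate_cloud": second_ranking[:1]}
    return {"priority": first_ranking, "migrate_to_alternate_cloud": []}

prev = {"vm1": {"cpu": 0.30, "mem": 0.40, "io": 0.10, "region": 0.05},
        "vm2": {"cpu": 0.50, "mem": 0.45, "io": 0.20, "region": 0.10}}
curr = {"vm1": {"cpu": 0.75, "mem": 0.55, "io": 0.15, "region": 0.30},
        "vm2": {"cpu": 0.48, "mem": 0.46, "io": 0.22, "region": 0.12}}
print(manage(["vm1", "vm2"], prev, curr, total_utilization=0.90))
# vm1's jump trips the threshold, so it is flagged for alternate cloud resources
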
US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (network resource, remote server) using a low inter-reference recency set (LIRS) replacement scheme .
US20110302321A1
CLAIM 1
. A system for redirecting a client to a remote server (cloud resources, cloud computing environment, alternate cloud resources) via an enhanced communications channel comprising : a gateway configured to provide access to said remote server via at least one enhanced channel ;
one or more enhanced channels between the gateway and the remote server ;
a first redirector configured to respond to the client by redirecting the client to the gateway ;
and a second redirector configured to receive a request from the client through a non-enhanced communications channel and to redirect the request to the first redirector .

US20110302321A1
CLAIM 18
. The method of claim 15 wherein the gateway is within a first local area network and the network resource (cloud resources, cloud computing environment, alternate cloud resources) is within a second local area network remote from the first local area network .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (network resource, remote server) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (network services) , or resources included in virtual private networks (VPNs) .
US20110302321A1
CLAIM 1
. A system for redirecting a client to a remote server (cloud resources, cloud computing environment, alternate cloud resources) via an enhanced communications channel comprising : a gateway configured to provide access to said remote server via at least one enhanced channel ;
one or more enhanced channels between the gateway and the remote server ;
a first redirector configured to respond to the client by redirecting the client to the gateway ;
and a second redirector configured to receive a request from the client through a non-enhanced communications channel and to redirect the request to the first redirector .

US20110302321A1
CLAIM 15
. A method for redirecting data comprising : receiving at a first redirector , a first request for network services (Internet resources) through a first communication link , the network services provided by a network device ;
sending from the first redirector , a network address of a second redirector in response to the first request for the network services ;
selecting a second communication link from a plurality of second communication links that support communication with the network device , the second communication link provided by a gateway ;
receiving at the second redirector , a second request for the network services , the second request requesting the same network services as the first request ;
sending from the second redirector , a network address of the gateway in response to the second request for the network services ;
and providing the network services by transferring data through the selected second communication link .

US20110302321A1
CLAIM 18
. The method of claim 15 wherein the gateway is within a first local area network and the network resource (cloud resources, cloud computing environment, alternate cloud resources) is within a second local area network remote from the first local area network .

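For orientation only, the following small Python sketch enumerates the alternate-cloud categories listed in claim 11 and picks a migration target from whatever is available; the enum values mirror the claim language, while the preference order and availability set are illustrative assumptions.

from enum import Enum

class AlternateCloud(Enum):
    PUBLIC = "public cloud"
    COMMUNITY = "community cloud"
    PRIVATE = "private cloud"
    HYBRID = "hybrid cloud"
    INTERNET = "Internet resources"
    VPN = "virtual private network"

def pick_target(preference_order, available):
    # Return the first preferred alternate-cloud category that is available.
    for kind in preference_order:
        if kind in available:
            return kind
    return None

print(pick_target([AlternateCloud.PRIVATE, AlternateCloud.PUBLIC],
                  {AlternateCloud.PUBLIC, AlternateCloud.VPN}))
# AlternateCloud.PUBLIC
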
US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (network resource, remote server) using least recently used (LRU) replacement scheme .
US20110302321A1
CLAIM 1
. A system for redirecting a client to a remote server (cloud resources, cloud computing environment, alternate cloud resources) via an enhanced communications channel comprising : a gateway configured to provide access to said remote server via at least one enhanced channel ;
one or more enhanced channels between the gateway and the remote server ;
a first redirector configured to respond to the client by redirecting the client to the gateway ;
and a second redirector configured to receive a request from the client through a non-enhanced communications channel and to redirect the request to the first redirector .

US20110302321A1
CLAIM 18
. The method of claim 15 wherein the gateway is within a first local area network and the network resource (cloud resources, cloud computing environment, alternate cloud resources) is within a second local area network remote from the first local area network .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment (network resource, remote server) , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (network resource, remote server) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource (first request) management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20110302321A1
CLAIM 1
. A system for redirecting a client to a remote server (cloud resources, cloud computing environment, alternate cloud resources) via an enhanced communications channel comprising : a gateway configured to provide access to said remote server via at least one enhanced channel ;
one or more enhanced channels between the gateway and the remote server ;
a first redirector configured to respond to the client by redirecting the client to the gateway ;
and a second redirector configured to receive a request from the client through a non-enhanced communications channel and to redirect the request to the first redirector .

US20110302321A1
CLAIM 15
. A method for redirecting data comprising : receiving at a first redirector , a first request (first resource) for network services through a first communication link , the network services provided by a network device ;
sending from the first redirector , a network address of a second redirector in response to the first request for the network services ;
selecting a second communication link from a plurality of second communication links that support communication with the network device , the second communication link provided by a gateway ;
receiving at the second redirector , a second request for the network services , the second request requesting the same network services as the first request ;
sending from the second redirector , a network address of the gateway in response to the second request for the network services ;
and providing the network services by transferring data through the selected second communication link .

US20110302321A1
CLAIM 18
. The method of claim 15 wherein the gateway is within a first local area network and the network resource (cloud resources, cloud computing environment, alternate cloud resources) is within a second local area network remote from the first local area network .

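For orientation only, the following minimal Python sketch illustrates the "change region size" metric that claims 7 and 13 derive from changed regions of a VM's graphical display: two frames are compared tile by tile and the number of differing tiles is reported. The tile size and the nested-list frame representation are illustrative assumptions.

def change_region_size(prev_frame, curr_frame, tile=8):
    """Count how many fixed-size tiles of a VM's display changed between two
    frames; the count serves as the change-region-size metric."""
    height, width = len(curr_frame), len(curr_frame[0])
    changed = 0
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            before = [row[x:x + tile] for row in prev_frame[y:y + tile]]
            after = [row[x:x + tile] for row in curr_frame[y:y + tile]]
            if before != after:
                changed += 1
    return changed

# Two 16x16 frames represented as nested lists; only the top-left tile changes.
prev = [[0] * 16 for _ in range(16)]
curr = [row[:] for row in prev]
curr[1][1] = 255
print(change_region_size(prev, curr))   # -> 1 changed tile out of 4
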
US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (network resource, remote server) using a low inter-reference recency set (LIRS) replacement scheme .
US20110302321A1
CLAIM 1
. A system for redirecting a client to a remote server (cloud resources, cloud computing environment, alternate cloud resources) via an enhanced communications channel comprising : a gateway configured to provide access to said remote server via at least one enhanced channel ;
one or more enhanced channels between the gateway and the remote server ;
a first redirector configured to respond to the client by redirecting the client to the gateway ;
and a second redirector configured to receive a request from the client through a non-enhanced communications channel and to redirect the request to the first redirector .

US20110302321A1
CLAIM 18
. The method of claim 15 wherein the gateway is within a first local area network and the network resource (cloud resources, cloud computing environment, alternate cloud resources) is within a second local area network remote from the first local area network .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (network resource, remote server) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (network services) , or resources included in virtual private networks (VPNs) .
US20110302321A1
CLAIM 1
. A system for redirecting a client to a remote server (cloud resources, cloud computing environment, alternate cloud resources) via an enhanced communications channel comprising : a gateway configured to provide access to said remote server via at least one enhanced channel ;
one or more enhanced channels between the gateway and the remote server ;
a first redirector configured to respond to the client by redirecting the client to the gateway ;
and a second redirector configured to receive a request from the client through a non-enhanced communications channel and to redirect the request to the first redirector .

US20110302321A1
CLAIM 15
. A method for redirecting data comprising : receiving at a first redirector , a first request for network services (Internet resources) through a first communication link , the network services provided by a network device ;
sending from the first redirector , a network address of a second redirector in response to the first request for the network services ;
selecting a second communication link from a plurality of second communication links that support communication with the network device , the second communication link provided by a gateway ;
receiving at the second redirector , a second request for the network services , the second request requesting the same network services as the first request ;
sending from the second redirector , a network address of the gateway in response to the second request for the network services ;
and providing the network services by transferring data through the selected second communication link .

US20110302321A1
CLAIM 18
. The method of claim 15 wherein the gateway is within a first local area network and the network resource (cloud resources, cloud computing environment, alternate cloud resources) is within a second local area network remote from the first local area network .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (network resource, remote server) using least recently used (LRU) replacement scheme .
US20110302321A1
CLAIM 1
. A system for redirecting a client to a remote server (cloud resources, cloud computing environment, alternate cloud resources) via an enhanced communications channel comprising : a gateway configured to provide access to said remote server via at least one enhanced channel ;
one or more enhanced channels between the gateway and the remote server ;
a first redirector configured to respond to the client by redirecting the client to the gateway ;
and a second redirector configured to receive a request from the client through a non-enhanced communications channel and to redirect the request to the first redirector .

US20110302321A1
CLAIM 18
. The method of claim 15 wherein the gateway is within a first local area network and the network resource (cloud resources, cloud computing environment, alternate cloud resources) is within a second local area network remote from the first local area network .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN102156665A

Filed: 2011-04-13     Issued: 2011-08-17

一种虚拟化系统竞争资源差异化服务方法 (Differentiated service method for contended resources in a virtualized system)

(Original Assignee) 杭州电子科技大学 (Hangzhou Dianzi University)

余日泰, 蒋从锋, 徐向华, 万健, 张纪林, 殷昱煜, 任祖杰
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (使用情况, 的使用) , memory usage (使用情况, 的使用) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102156665A
CLAIM 1
. 一种虚拟化系统竞争资源差异化服务方法,其特征在于该方法包括如下步骤: 步骤1 . 处理器协调器、内存协调器和磁盘协调器分别收集处理器、内存、磁盘的实时信息后发送给本地资源协调器;所述的处理器实时信息包括处理器使用率、处理器队列长度和处理器硬件性能计数器 fn息;所述的内存实时信息包括内存空间使用率、内存高速缓存命中率和内存高速缓存缺失率信息;所述的磁盘实时信息包括磁盘的读/写速率、磁盘输入输出等待对列长度、读写块大小和块数量;步骤2 . 本地资源协调器利用接收到的处理器、内存和磁盘的实时信息分别计算资源和负载的马尔科夫模型参数,计算完成后发送给全局资源协调器;步骤3 . 全局资源协调器根据所有资源的使用情况 (processor usage, memory usage, processor usage tracking) 和各个客户虚拟机的负载信息,基于响应时间的多虚拟机系统的服务质量评价方法,生成基于竞争缓解程度的竞争资源差异化服务策略;所述的竞争资源差异化服务策略具体是:将各个客户虚拟机的竞争缓解程度按从大到小的顺序排序,根据其排序结果,生成竞争资源差异化服务策略;步骤4 . 竞争资源差异化服务策略由全局资源协调器发送至本地资源协调器; 步骤5 . 本地资源协调器将竞争资源差异化服务策略分别发送至处理器协调器、内存协调器和磁盘协调器;步骤6 . 处理器协调器、内存协调器和磁盘协调器根据竞争资源差异化服务策略,对处理器、内存、磁盘资源进行分配;步骤7 . 根据客户虚拟机的性能表现,决定是否周期性循环步骤1至步骤6。

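For orientation only, the following high-level Python sketch corresponds to step 3 of the CN102156665A claim quoted above: guest VMs are sorted by their contention-mitigation degree in descending order and resource shares are granted in that order. The scoring values and share units are illustrative assumptions, and the Markov-model parameter estimation of step 2 is not reproduced.

def differentiated_service_policy(vm_metrics, total_share=100):
    """Order guest VMs by an assumed contention-mitigation degree, largest
    first, then hand out resource shares in that order."""
    ordered = sorted(vm_metrics, key=lambda v: v["mitigation_degree"], reverse=True)
    policy, remaining = {}, total_share
    for vm in ordered:
        share = min(vm["requested_share"], remaining)
        policy[vm["name"]] = share
        remaining -= share
    return [v["name"] for v in ordered], policy

vms = [{"name": "vmA", "mitigation_degree": 0.8, "requested_share": 50},
       {"name": "vmB", "mitigation_degree": 0.3, "requested_share": 40},
       {"name": "vmC", "mitigation_degree": 0.6, "requested_share": 40}]
print(differentiated_service_policy(vms))
# (['vmA', 'vmC', 'vmB'], {'vmA': 50, 'vmC': 40, 'vmB': 10})
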
US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (使用情况, 的使用) tracking .
CN102156665A
CLAIM 1
. 一种虚拟化系统竞争资源差异化服务方法,其特征在于该方法包括如下步骤: 步骤1 . 处理器协调器、内存协调器和磁盘协调器分别收集处理器、内存、磁盘的实时信息后发送给本地资源协调器;所述的处理器实时信息包括处理器使用率、处理器队列长度和处理器硬件性能计数器 fn息;所述的内存实时信息包括内存空间使用率、内存高速缓存命中率和内存高速缓存缺失率信息;所述的磁盘实时信息包括磁盘的读/写速率、磁盘输入输出等待对列长度、读写块大小和块数量;步骤2 . 本地资源协调器利用接收到的处理器、内存和磁盘的实时信息分别计算资源和负载的马尔科夫模型参数,计算完成后发送给全局资源协调器;步骤3 . 全局资源协调器根据所有资源的使用情况 (processor usage, memory usage, processor usage tracking) 和各个客户虚拟机的负载信息,基于响应时间的多虚拟机系统的服务质量评价方法,生成基于竞争缓解程度的竞争资源差异化服务策略;所述的竞争资源差异化服务策略具体是:将各个客户虚拟机的竞争缓解程度按从大到小的顺序排序,根据其排序结果,生成竞争资源差异化服务策略;步骤4 . 竞争资源差异化服务策略由全局资源协调器发送至本地资源协调器; 步骤5 . 本地资源协调器将竞争资源差异化服务策略分别发送至处理器协调器、内存协调器和磁盘协调器;步骤6 . 处理器协调器、内存协调器和磁盘协调器根据竞争资源差异化服务策略,对处理器、内存、磁盘资源进行分配;步骤7 . 根据客户虚拟机的性能表现,决定是否周期性循环步骤1至步骤6。

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (使用情况, 的使用) tracking .
CN102156665A
CLAIM 1
. 一种虚拟化系统竞争资源差异化服务方法,其特征在于该方法包括如下步骤: 步骤1 . 处理器协调器、内存协调器和磁盘协调器分别收集处理器、内存、磁盘的实时信息后发送给本地资源协调器;所述的处理器实时信息包括处理器使用率、处理器队列长度和处理器硬件性能计数器 fn息;所述的内存实时信息包括内存空间使用率、内存高速缓存命中率和内存高速缓存缺失率信息;所述的磁盘实时信息包括磁盘的读/写速率、磁盘输入输出等待对列长度、读写块大小和块数量;步骤2 . 本地资源协调器利用接收到的处理器、内存和磁盘的实时信息分别计算资源和负载的马尔科夫模型参数,计算完成后发送给全局资源协调器;步骤3 . 全局资源协调器根据所有资源的使用情况 (processor usage, memory usage, processor usage tracking) 和各个客户虚拟机的负载信息,基于响应时间的多虚拟机系统的服务质量评价方法,生成基于竞争缓解程度的竞争资源差异化服务策略;所述的竞争资源差异化服务策略具体是:将各个客户虚拟机的竞争缓解程度按从大到小的顺序排序,根据其排序结果,生成竞争资源差异化服务策略;步骤4 . 竞争资源差异化服务策略由全局资源协调器发送至本地资源协调器; 步骤5 . 本地资源协调器将竞争资源差异化服务策略分别发送至处理器协调器、内存协调器和磁盘协调器;步骤6 . 处理器协调器、内存协调器和磁盘协调器根据竞争资源差异化服务策略,对处理器、内存、磁盘资源进行分配;步骤7 . 根据客户虚拟机的性能表现,决定是否周期性循环步骤1至步骤6。

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (使用情况, 的使用) , memory usage (使用情况, 的使用) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102156665A
CLAIM 1
. 一种虚拟化系统竞争资源差异化服务方法,其特征在于该方法包括如下步骤: 步骤1 . 处理器协调器、内存协调器和磁盘协调器分别收集处理器、内存、磁盘的实时信息后发送给本地资源协调器;所述的处理器实时信息包括处理器使用率、处理器队列长度和处理器硬件性能计数器 fn息;所述的内存实时信息包括内存空间使用率、内存高速缓存命中率和内存高速缓存缺失率信息;所述的磁盘实时信息包括磁盘的读/写速率、磁盘输入输出等待对列长度、读写块大小和块数量;步骤2 . 本地资源协调器利用接收到的处理器、内存和磁盘的实时信息分别计算资源和负载的马尔科夫模型参数,计算完成后发送给全局资源协调器;步骤3 . 全局资源协调器根据所有资源的使用情况 (processor usage, memory usage, processor usage tracking) 和各个客户虚拟机的负载信息,基于响应时间的多虚拟机系统的服务质量评价方法,生成基于竞争缓解程度的竞争资源差异化服务策略;所述的竞争资源差异化服务策略具体是:将各个客户虚拟机的竞争缓解程度按从大到小的顺序排序,根据其排序结果,生成竞争资源差异化服务策略;步骤4 . 竞争资源差异化服务策略由全局资源协调器发送至本地资源协调器; 步骤5 . 本地资源协调器将竞争资源差异化服务策略分别发送至处理器协调器、内存协调器和磁盘协调器;步骤6 . 处理器协调器、内存协调器和磁盘协调器根据竞争资源差异化服务策略,对处理器、内存、磁盘资源进行分配;步骤7 . 根据客户虚拟机的性能表现,决定是否周期性循环步骤1至步骤6。

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (使用情况, 的使用) tracking .
CN102156665A
CLAIM 1
. 一种虚拟化系统竞争资源差异化服务方法,其特征在于该方法包括如下步骤: 步骤1 . 处理器协调器、内存协调器和磁盘协调器分别收集处理器、内存、磁盘的实时信息后发送给本地资源协调器;所述的处理器实时信息包括处理器使用率、处理器队列长度和处理器硬件性能计数器 fn息;所述的内存实时信息包括内存空间使用率、内存高速缓存命中率和内存高速缓存缺失率信息;所述的磁盘实时信息包括磁盘的读/写速率、磁盘输入输出等待对列长度、读写块大小和块数量;步骤2 . 本地资源协调器利用接收到的处理器、内存和磁盘的实时信息分别计算资源和负载的马尔科夫模型参数,计算完成后发送给全局资源协调器;步骤3 . 全局资源协调器根据所有资源的使用情况 (processor usage, memory usage, processor usage tracking) 和各个客户虚拟机的负载信息,基于响应时间的多虚拟机系统的服务质量评价方法,生成基于竞争缓解程度的竞争资源差异化服务策略;所述的竞争资源差异化服务策略具体是:将各个客户虚拟机的竞争缓解程度按从大到小的顺序排序,根据其排序结果,生成竞争资源差异化服务策略;步骤4 . 竞争资源差异化服务策略由全局资源协调器发送至本地资源协调器; 步骤5 . 本地资源协调器将竞争资源差异化服务策略分别发送至处理器协调器、内存协调器和磁盘协调器;步骤6 . 处理器协调器、内存协调器和磁盘协调器根据竞争资源差异化服务策略,对处理器、内存、磁盘资源进行分配;步骤7 . 根据客户虚拟机的性能表现,决定是否周期性循环步骤1至步骤6。

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (使用情况, 的使用) tracking .
CN102156665A
CLAIM 1
. 一种虚拟化系统竞争资源差异化服务方法,其特征在于该方法包括如下步骤: 步骤1 . 处理器协调器、内存协调器和磁盘协调器分别收集处理器、内存、磁盘的实时信息后发送给本地资源协调器;所述的处理器实时信息包括处理器使用率、处理器队列长度和处理器硬件性能计数器 fn息;所述的内存实时信息包括内存空间使用率、内存高速缓存命中率和内存高速缓存缺失率信息;所述的磁盘实时信息包括磁盘的读/写速率、磁盘输入输出等待对列长度、读写块大小和块数量;步骤2 . 本地资源协调器利用接收到的处理器、内存和磁盘的实时信息分别计算资源和负载的马尔科夫模型参数,计算完成后发送给全局资源协调器;步骤3 . 全局资源协调器根据所有资源的使用情况 (processor usage, memory usage, processor usage tracking) 和各个客户虚拟机的负载信息,基于响应时间的多虚拟机系统的服务质量评价方法,生成基于竞争缓解程度的竞争资源差异化服务策略;所述的竞争资源差异化服务策略具体是:将各个客户虚拟机的竞争缓解程度按从大到小的顺序排序,根据其排序结果,生成竞争资源差异化服务策略;步骤4 . 竞争资源差异化服务策略由全局资源协调器发送至本地资源协调器; 步骤5 . 本地资源协调器将竞争资源差异化服务策略分别发送至处理器协调器、内存协调器和磁盘协调器;步骤6 . 处理器协调器、内存协调器和磁盘协调器根据竞争资源差异化服务策略,对处理器、内存、磁盘资源进行分配;步骤7 . 根据客户虚拟机的性能表现,决定是否周期性循环步骤1至步骤6。

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate (高速缓存) , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (使用情况, 的使用) , memory usage (使用情况, 的使用) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102156665A
CLAIM 1
. 一种虚拟化系统竞争资源差异化服务方法,其特征在于该方法包括如下步骤: 步骤1 . 处理器协调器、内存协调器和磁盘协调器分别收集处理器、内存、磁盘的实时信息后发送给本地资源协调器;所述的处理器实时信息包括处理器使用率、处理器队列长度和处理器硬件性能计数器 fn息;所述的内存实时信息包括内存空间使用率、内存高速缓存 (CPU consumption rate) 命中率和内存高速缓存缺失率信息;所述的磁盘实时信息包括磁盘的读/写速率、磁盘输入输出等待对列长度、读写块大小和块数量;步骤2 . 本地资源协调器利用接收到的处理器、内存和磁盘的实时信息分别计算资源和负载的马尔科夫模型参数,计算完成后发送给全局资源协调器;步骤3 . 全局资源协调器根据所有资源的使用情况 (processor usage, memory usage, processor usage tracking) 和各个客户虚拟机的负载信息,基于响应时间的多虚拟机系统的服务质量评价方法,生成基于竞争缓解程度的竞争资源差异化服务策略;所述的竞争资源差异化服务策略具体是:将各个客户虚拟机的竞争缓解程度按从大到小的顺序排序,根据其排序结果,生成竞争资源差异化服务策略;步骤4 . 竞争资源差异化服务策略由全局资源协调器发送至本地资源协调器; 步骤5 . 本地资源协调器将竞争资源差异化服务策略分别发送至处理器协调器、内存协调器和磁盘协调器;步骤6 . 处理器协调器、内存协调器和磁盘协调器根据竞争资源差异化服务策略,对处理器、内存、磁盘资源进行分配;步骤7 . 根据客户虚拟机的性能表现,决定是否周期性循环步骤1至步骤6。

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (使用情况, 的使用) tracking .
CN102156665A
CLAIM 1
. 一种虚拟化系统竞争资源差异化服务方法,其特征在于该方法包括如下步骤: 步骤1 . 处理器协调器、内存协调器和磁盘协调器分别收集处理器、内存、磁盘的实时信息后发送给本地资源协调器;所述的处理器实时信息包括处理器使用率、处理器队列长度和处理器硬件性能计数器 fn息;所述的内存实时信息包括内存空间使用率、内存高速缓存命中率和内存高速缓存缺失率信息;所述的磁盘实时信息包括磁盘的读/写速率、磁盘输入输出等待对列长度、读写块大小和块数量;步骤2 . 本地资源协调器利用接收到的处理器、内存和磁盘的实时信息分别计算资源和负载的马尔科夫模型参数,计算完成后发送给全局资源协调器;步骤3 . 全局资源协调器根据所有资源的使用情况 (processor usage, memory usage, processor usage tracking) 和各个客户虚拟机的负载信息,基于响应时间的多虚拟机系统的服务质量评价方法,生成基于竞争缓解程度的竞争资源差异化服务策略;所述的竞争资源差异化服务策略具体是:将各个客户虚拟机的竞争缓解程度按从大到小的顺序排序,根据其排序结果,生成竞争资源差异化服务策略;步骤4 . 竞争资源差异化服务策略由全局资源协调器发送至本地资源协调器; 步骤5 . 本地资源协调器将竞争资源差异化服务策略分别发送至处理器协调器、内存协调器和磁盘协调器;步骤6 . 处理器协调器、内存协调器和磁盘协调器根据竞争资源差异化服务策略,对处理器、内存、磁盘资源进行分配;步骤7 . 根据客户虚拟机的性能表现,决定是否周期性循环步骤1至步骤6。

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (使用情况, 的使用) tracking .
CN102156665A
CLAIM 1
. 一种虚拟化系统竞争资源差异化服务方法,其特征在于该方法包括如下步骤: 步骤1 . 处理器协调器、内存协调器和磁盘协调器分别收集处理器、内存、磁盘的实时信息后发送给本地资源协调器;所述的处理器实时信息包括处理器使用率、处理器队列长度和处理器硬件性能计数器 fn息;所述的内存实时信息包括内存空间使用率、内存高速缓存命中率和内存高速缓存缺失率信息;所述的磁盘实时信息包括磁盘的读/写速率、磁盘输入输出等待对列长度、读写块大小和块数量;步骤2 . 本地资源协调器利用接收到的处理器、内存和磁盘的实时信息分别计算资源和负载的马尔科夫模型参数,计算完成后发送给全局资源协调器;步骤3 . 全局资源协调器根据所有资源的使用情况 (processor usage, memory usage, processor usage tracking) 和各个客户虚拟机的负载信息,基于响应时间的多虚拟机系统的服务质量评价方法,生成基于竞争缓解程度的竞争资源差异化服务策略;所述的竞争资源差异化服务策略具体是:将各个客户虚拟机的竞争缓解程度按从大到小的顺序排序,根据其排序结果,生成竞争资源差异化服务策略;步骤4 . 竞争资源差异化服务策略由全局资源协调器发送至本地资源协调器; 步骤5 . 本地资源协调器将竞争资源差异化服务策略分别发送至处理器协调器、内存协调器和磁盘协调器;步骤6 . 处理器协调器、内存协调器和磁盘协调器根据竞争资源差异化服务策略,对处理器、内存、磁盘资源进行分配;步骤7 . 根据客户虚拟机的性能表现,决定是否周期性循环步骤1至步骤6。




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN102082692A

Filed: 2011-01-24     Issued: 2011-06-01

基于网络数据流向的虚拟机迁移方法、设备和集群系统 (Virtual machine migration method, device, and cluster system based on network data flow direction)

(Original Assignee) 华为技术有限公司 (Huawei Technologies Co., Ltd.)

叶川, 许建辉
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size (大小确) based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102082692A
CLAIM 5
. 根据权利要求1,2或4任一项所述的集群系统,其特征在于,所述VMM具体用于:取 样获取虚拟机列表中的虚拟机的数据包,根据所述数据包的IP信息和对应数量,统计对应 虚拟机的流入源IP地址及对应的流量数据、流出目标IP地址及对应的流量数据,并将相应 的统计结果输出给所述网络流量监控模块,以及在接收所述虚拟机部署调整模块下发的虚 拟机迁移命令后,根据所述虚拟机迁移命令将所述待迁移虚拟机迁移到对应的目标业务节 占上-所述网络流量监控模块具体用于:统计超过流量阈值的虚拟机列表,并将所述虚拟机 列表输出给所述VMM ;
根据来自所述VMM的统计结果得到数据流向记录,所述数据流向记录 包括虚拟机标识信息,流入源IP地址以及对应的流量数据、流出目标IP地址以及对应的流 量数据,所述虚拟机标识信息包括虚拟机名和虚拟机IP地址;所述虚拟机部署调整模块具体用于:向所述集群系统内的各业务节点上的网络流量监 控模块获取所述数据流向记录;对所述数据流向记录进行匹配处理,得到所述集群系统内 虚拟机之间的数据流向关系,所述集群系统内虚拟机之间的数据流向关系用流向匹配记录 表示,包括:源虚拟机标识信息、目标虚拟机标识信息及对应流量大小;并根据所述集群系 统的流向匹配记录确定满足虚拟机迁移策略的待迁移虚拟机,向其上运行有所确定的待迁 移虚拟机的VMM下发虚拟机迁移命令或者输出对应的虚拟机迁移建议;其中,所述虚拟机 迁移策略包括:虚拟机迁移数量小于等于设定阈值,和/或,根据减少的网络流量大小确 (change region size) 定 迁移顺序,和/或,满足虚拟机迁移数量最小化和减少的网络流量最大化的平衡关系,和/或,将存在数据流向关系的虚拟机迁移到同一目标业务节点,所述目标业务节点为根据集 群系统内存在数据流向关系的多个虚拟机各自宿主的业务节点的物理资源情况所确定的。

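For orientation only, the following minimal Python sketch approximates the flow-direction matching and migration-suggestion policy described in claim 5 of CN102082692A quoted above: per-VM traffic records are matched into VM-to-VM flow relations, and the partner of the heaviest cross-node relation is suggested for migration onto the same node. The IP addresses, byte counts and single-move policy are illustrative assumptions.

def match_flows(flow_records, vm_ip):
    """Pair per-VM traffic records into VM-to-VM flow relations.
    flow_records: list of (src_vm, dst_ip, bytes); vm_ip: vm -> its IP."""
    ip_vm = {ip: vm for vm, ip in vm_ip.items()}
    relations = {}
    for src_vm, dst_ip, nbytes in flow_records:
        dst_vm = ip_vm.get(dst_ip)
        if dst_vm and dst_vm != src_vm:
            key = tuple(sorted((src_vm, dst_vm)))
            relations[key] = relations.get(key, 0) + nbytes
    return relations

def migration_suggestions(relations, placement, max_moves=1):
    """Suggest moving the partner of the heaviest cross-node relation onto the
    same node (one of the migration policies listed in the claim)."""
    cross = [(traffic, pair) for pair, traffic in relations.items()
             if placement[pair[0]] != placement[pair[1]]]
    cross.sort(reverse=True)
    moves = []
    for traffic, (a, b) in cross[:max_moves]:
        moves.append((b, placement[a]))      # move b next to a
    return moves

vm_ip = {"vm1": "10.0.0.11", "vm2": "10.0.0.12", "vm3": "10.0.0.13"}
records = [("vm1", "10.0.0.12", 5_000), ("vm2", "10.0.0.11", 7_000),
           ("vm1", "10.0.0.13", 1_000)]
placement = {"vm1": "node1", "vm2": "node2", "vm3": "node1"}
rel = match_flows(records, vm_ip)
print(rel, migration_suggestions(rel, placement))
# {('vm1','vm2'): 12000, ('vm1','vm3'): 1000}  [('vm2', 'node1')]
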
US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (网络数据) , or resources included in virtual private networks (VPNs) .
CN102082692A
CLAIM 14
. 一种基于网络数据 (Internet resources) 流向的虚拟机迁移方法,其特征在于,包括:向集群系统内的各业务节点收集宿主在所述业务节点上的虚拟机的网络流量信息;根据所述虚拟机的网络流量信息确定所述集群系统内虚拟机之间的数据流向关系;根据虚拟机迁移策略和所述集群系统内虚拟机之间的数据流向关系,向其上运行有所 确定的待迁移虚拟机的虚拟机监控单元VMM下发虚拟机迁移命令或者输出对应的虚拟机 迁移建议。

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size (大小确) based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102082692A
CLAIM 5
. 根据权利要求1,2或4任一项所述的集群系统,其特征在于,所述VMM具体用于:取 样获取虚拟机列表中的虚拟机的数据包,根据所述数据包的IP信息和对应数量,统计对应 虚拟机的流入源IP地址及对应的流量数据、流出目标IP地址及对应的流量数据,并将相应 的统计结果输出给所述网络流量监控模块,以及在接收所述虚拟机部署调整模块下发的虚 拟机迁移命令后,根据所述虚拟机迁移命令将所述待迁移虚拟机迁移到对应的目标业务节 占上-所述网络流量监控模块具体用于:统计超过流量阈值的虚拟机列表,并将所述虚拟机 列表输出给所述VMM ;
根据来自所述VMM的统计结果得到数据流向记录,所述数据流向记录 包括虚拟机标识信息,流入源IP地址以及对应的流量数据、流出目标IP地址以及对应的流 量数据,所述虚拟机标识信息包括虚拟机名和虚拟机IP地址;所述虚拟机部署调整模块具体用于:向所述集群系统内的各业务节点上的网络流量监 控模块获取所述数据流向记录;对所述数据流向记录进行匹配处理,得到所述集群系统内 虚拟机之间的数据流向关系,所述集群系统内虚拟机之间的数据流向关系用流向匹配记录 表示,包括:源虚拟机标识信息、目标虚拟机标识信息及对应流量大小;并根据所述集群系 统的流向匹配记录确定满足虚拟机迁移策略的待迁移虚拟机,向其上运行有所确定的待迁 移虚拟机的VMM下发虚拟机迁移命令或者输出对应的虚拟机迁移建议;其中,所述虚拟机 迁移策略包括:虚拟机迁移数量小于等于设定阈值,和/或,根据减少的网络流量大小确 (change region size) 定 迁移顺序,和/或,满足虚拟机迁移数量最小化和减少的网络流量最大化的平衡关系,和/或,将存在数据流向关系的虚拟机迁移到同一目标业务节点,所述目标业务节点为根据集 群系统内存在数据流向关系的多个虚拟机各自宿主的业务节点的物理资源情况所确定的。

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (网络数据) , or resources included in virtual private networks (VPNs) .
CN102082692A
CLAIM 14
. 一种基于网络数据 (Internet resources) 流向的虚拟机迁移方法,其特征在于,包括:向集群系统内的各业务节点收集宿主在所述业务节点上的虚拟机的网络流量信息;根据所述虚拟机的网络流量信息确定所述集群系统内虚拟机之间的数据流向关系;根据虚拟机迁移策略和所述集群系统内虚拟机之间的数据流向关系,向其上运行有所 确定的待迁移虚拟机的虚拟机监控单元VMM下发虚拟机迁移命令或者输出对应的虚拟机 迁移建议。

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size (大小确) determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102082692A
CLAIM 5
. 根据权利要求1,2或4任一项所述的集群系统,其特征在于,所述VMM具体用于:取 样获取虚拟机列表中的虚拟机的数据包,根据所述数据包的IP信息和对应数量,统计对应 虚拟机的流入源IP地址及对应的流量数据、流出目标IP地址及对应的流量数据,并将相应 的统计结果输出给所述网络流量监控模块,以及在接收所述虚拟机部署调整模块下发的虚 拟机迁移命令后,根据所述虚拟机迁移命令将所述待迁移虚拟机迁移到对应的目标业务节 占上-所述网络流量监控模块具体用于:统计超过流量阈值的虚拟机列表,并将所述虚拟机 列表输出给所述VMM ;
根据来自所述VMM的统计结果得到数据流向记录,所述数据流向记录 包括虚拟机标识信息,流入源IP地址以及对应的流量数据、流出目标IP地址以及对应的流 量数据,所述虚拟机标识信息包括虚拟机名和虚拟机IP地址;所述虚拟机部署调整模块具体用于:向所述集群系统内的各业务节点上的网络流量监 控模块获取所述数据流向记录;对所述数据流向记录进行匹配处理,得到所述集群系统内 虚拟机之间的数据流向关系,所述集群系统内虚拟机之间的数据流向关系用流向匹配记录 表示,包括:源虚拟机标识信息、目标虚拟机标识信息及对应流量大小;并根据所述集群系 统的流向匹配记录确定满足虚拟机迁移策略的待迁移虚拟机,向其上运行有所确定的待迁 移虚拟机的VMM下发虚拟机迁移命令或者输出对应的虚拟机迁移建议;其中,所述虚拟机 迁移策略包括:虚拟机迁移数量小于等于设定阈值,和/或,根据减少的网络流量大小确 (change region size) 定 迁移顺序,和/或,满足虚拟机迁移数量最小化和减少的网络流量最大化的平衡关系,和/或,将存在数据流向关系的虚拟机迁移到同一目标业务节点,所述目标业务节点为根据集 群系统内存在数据流向关系的多个虚拟机各自宿主的业务节点的物理资源情况所确定的。

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (网络数据) , or resources included in virtual private networks (VPNs) .
CN102082692A
CLAIM 14
. 一种基于网络数据 (Internet resources) 流向的虚拟机迁移方法,其特征在于,包括:向集群系统内的各业务节点收集宿主在所述业务节点上的虚拟机的网络流量信息;根据所述虚拟机的网络流量信息确定所述集群系统内虚拟机之间的数据流向关系;根据虚拟机迁移策略和所述集群系统内虚拟机之间的数据流向关系,向其上运行有所 确定的待迁移虚拟机的虚拟机监控单元VMM下发虚拟机迁移命令或者输出对应的虚拟机 迁移建议。




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN102457504A

Filed: 2010-10-28     Issued: 2012-05-16

应用商店系统及使用该应用商店系统进行应用开发的方法 (Application store system and method for developing applications using the application store system)

(Original Assignee) ZTE Corp     (Current Assignee) ZTE Corp

巫妍
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (一种使用, 的使用) , memory usage (一种使用, 的使用) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102457504A
CLAIM 3
. 根据权利要求1所述的资源管理功能实体,其特征在于,还包括:授权请求模块,用于响应于请求获取所述资源的使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 授权的操作,向用于提供所述资源的功能实体发送授权请求;授权响应模块,用于在接收到所述用于提供所述资源的功能实体返回的成功响应的情况下,调用所述显示模块显示用于指示授权成功的信息。

CN102457504A
CLAIM 9
. 一种使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 权利要求7或8所述的应用商店系统进行应用开发的方法,其特征在于,包括:根据所述资源管理功能实体显示的所述注册的资源的信息,控制所述应用调用所述注册的资源;将所述应用上传至所述开发者社区功能实体。

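For orientation only, the following toy Python sketch mirrors the resource management functional entity of CN102457504A as quoted above (claims 1, 3 and 9): resources are registered and displayed, and an authorization request is relayed to the providing entity. The class names, resource names and always-granting provider are illustrative assumptions.

class Provider:
    def authorize(self, resource_name):
        return True                            # always grants in this sketch

class ResourceManager:
    """Toy resource management functional entity: register resources, display
    what is registered, and relay an authorization request to the provider."""
    def __init__(self):
        self.registry = {}
    def register(self, name, kind, provider):
        self.registry[name] = {"kind": kind, "provider": provider}
    def display(self):
        return ["%s (%s)" % (n, r["kind"]) for n, r in self.registry.items()]
    def request_authorization(self, name):
        granted = self.registry[name]["provider"].authorize(name)
        return "authorization granted" if granted else "authorization refused"

rm = ResourceManager()
rm.register("sms-api", "telecom capability resource", Provider())
rm.register("cdn", "network resource", Provider())
print(rm.display())
print(rm.request_authorization("sms-api"))
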
US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (一种使用, 的使用) tracking .
CN102457504A
CLAIM 3
. 根据权利要求1所述的资源管理功能实体,其特征在于,还包括:授权请求模块,用于响应于请求获取所述资源的使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 授权的操作,向用于提供所述资源的功能实体发送授权请求;授权响应模块,用于在接收到所述用于提供所述资源的功能实体返回的成功响应的情况下,调用所述显示模块显示用于指示授权成功的信息。

CN102457504A
CLAIM 9
. 一种使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 权利要求7或8所述的应用商店系统进行应用开发的方法,其特征在于,包括:根据所述资源管理功能实体显示的所述注册的资源的信息,控制所述应用调用所述注册的资源;将所述应用上传至所述开发者社区功能实体。

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (一种使用, 的使用) tracking .
CN102457504A
CLAIM 3
. 根据权利要求1所述的资源管理功能实体,其特征在于,还包括:授权请求模块,用于响应于请求获取所述资源的使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 授权的操作,向用于提供所述资源的功能实体发送授权请求;授权响应模块,用于在接收到所述用于提供所述资源的功能实体返回的成功响应的情况下,调用所述显示模块显示用于指示授权成功的信息。

CN102457504A
CLAIM 9
. 一种使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 权利要求7或8所述的应用商店系统进行应用开发的方法,其特征在于,包括:根据所述资源管理功能实体显示的所述注册的资源的信息,控制所述应用调用所述注册的资源;将所述应用上传至所述开发者社区功能实体。

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (一种资源) , or resources included in virtual private networks (VPNs) .
CN102457504A
CLAIM 1
. 一种资源 (Internet resources) 管理功能实体,其特征在于,包括:注册模块,用于接受将资源注册到所述资源管理功能实体的操作,其中,所述资源包括以下至少之一:电信能力资源、网络资源;显示模块,用于显示所述注册的资源的信息。

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (一种使用, 的使用) , memory usage (一种使用, 的使用) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102457504A
CLAIM 3
. 根据权利要求1所述的资源管理功能实体,其特征在于,还包括:授权请求模块,用于响应于请求获取所述资源的使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 授权的操作,向用于提供所述资源的功能实体发送授权请求;授权响应模块,用于在接收到所述用于提供所述资源的功能实体返回的成功响应的情况下,调用所述显示模块显示用于指示授权成功的信息。

CN102457504A
CLAIM 9
. 一种使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 权利要求7或8所述的应用商店系统进行应用开发的方法,其特征在于,包括:根据所述资源管理功能实体显示的所述注册的资源的信息,控制所述应用调用所述注册的资源;将所述应用上传至所述开发者社区功能实体。

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (一种使用, 的使用) tracking .
CN102457504A
CLAIM 3
. 根据权利要求1所述的资源管理功能实体,其特征在于,还包括:授权请求模块,用于响应于请求获取所述资源的使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 授权的操作,向用于提供所述资源的功能实体发送授权请求;授权响应模块,用于在接收到所述用于提供所述资源的功能实体返回的成功响应的情况下,调用所述显示模块显示用于指示授权成功的信息。

CN102457504A
CLAIM 9
. 一种使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 权利要求7或8所述的应用商店系统进行应用开发的方法,其特征在于,包括:根据所述资源管理功能实体显示的所述注册的资源的信息,控制所述应用调用所述注册的资源;将所述应用上传至所述开发者社区功能实体。

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (一种使用, 的使用) tracking .
CN102457504A
CLAIM 3
. 根据权利要求1所述的资源管理功能实体,其特征在于,还包括:授权请求模块,用于响应于请求获取所述资源的使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 授权的操作,向用于提供所述资源的功能实体发送授权请求;授权响应模块,用于在接收到所述用于提供所述资源的功能实体返回的成功响应的情况下,调用所述显示模块显示用于指示授权成功的信息。

CN102457504A
CLAIM 9
. 一种使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 权利要求7或8所述的应用商店系统进行应用开发的方法,其特征在于,包括:根据所述资源管理功能实体显示的所述注册的资源的信息,控制所述应用调用所述注册的资源;将所述应用上传至所述开发者社区功能实体。

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (一种资源) , or resources included in virtual private networks (VPNs) .
CN102457504A
CLAIM 1
. 一种资源 (Internet resources) 管理功能实体,其特征在于,包括:注册模块,用于接受将资源注册到所述资源管理功能实体的操作,其中,所述资源包括以下至少之一:电信能力资源、网络资源;显示模块,用于显示所述注册的资源的信息。

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (一种使用, 的使用) , memory usage (一种使用, 的使用) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102457504A
CLAIM 3
. 根据权利要求1所述的资源管理功能实体,其特征在于,还包括:授权请求模块,用于响应于请求获取所述资源的使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 授权的操作,向用于提供所述资源的功能实体发送授权请求;授权响应模块,用于在接收到所述用于提供所述资源的功能实体返回的成功响应的情况下,调用所述显示模块显示用于指示授权成功的信息。

CN102457504A
CLAIM 9
. 一种使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 权利要求7或8所述的应用商店系统进行应用开发的方法,其特征在于,包括:根据所述资源管理功能实体显示的所述注册的资源的信息,控制所述应用调用所述注册的资源;将所述应用上传至所述开发者社区功能实体。

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (一种使用, 的使用) tracking .
CN102457504A
CLAIM 3
. 根据权利要求1所述的资源管理功能实体,其特征在于,还包括:授权请求模块,用于响应于请求获取所述资源的使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 授权的操作,向用于提供所述资源的功能实体发送授权请求;授权响应模块,用于在接收到所述用于提供所述资源的功能实体返回的成功响应的情况下,调用所述显示模块显示用于指示授权成功的信息。

CN102457504A
CLAIM 9
. 一种使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 权利要求7或8所述的应用商店系统进行应用开发的方法,其特征在于,包括:根据所述资源管理功能实体显示的所述注册的资源的信息,控制所述应用调用所述注册的资源;将所述应用上传至所述开发者社区功能实体。

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (一种使用, 的使用) tracking .
CN102457504A
CLAIM 3
. 根据权利要求1所述的资源管理功能实体,其特征在于,还包括:授权请求模块,用于响应于请求获取所述资源的使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 授权的操作,向用于提供所述资源的功能实体发送授权请求;授权响应模块,用于在接收到所述用于提供所述资源的功能实体返回的成功响应的情况下,调用所述显示模块显示用于指示授权成功的信息。

CN102457504A
CLAIM 9
. 一种使用 (processor usage, memory usage, processor usage tracking, memory usage tracking) 权利要求7或8所述的应用商店系统进行应用开发的方法,其特征在于,包括:根据所述资源管理功能实体显示的所述注册的资源的信息,控制所述应用调用所述注册的资源;将所述应用上传至所述开发者社区功能实体。

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (一种资源) , or resources included in virtual private networks (VPNs) .
CN102457504A
CLAIM 1
. 一种资源 (Internet resources) 管理功能实体,其特征在于,包括:注册模块,用于接受将资源注册到所述资源管理功能实体的操作,其中,所述资源包括以下至少之一:电信能力资源、网络资源;显示模块,用于显示所述注册的资源的信息。




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
WO2011025720A1

Filed: 2010-08-22     Issued: 2011-03-03

Optimized thread scheduling via hardware performance monitoring

(Original Assignee) Advanced Micro Devices, Inc.     

William A. Moyes
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing (shared resources) environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (said determination) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource (first thread) management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2011025720A1
CLAIM 1
. A computing system comprising : one or more microprocessors comprising performance monitoring hardware ;
a memory coupled to the one or more microprocessors , wherein the memory stores a program comprising program code ;
and an operating system comprising a scheduler , wherein the scheduler is configured to : assign a plurality of software threads corresponding to the program code to a plurality of computation units ;
receive measured data values from the performance monitoring hardware as the one or more microprocessors process the software threads of the program code ;
and reassign a first thread (first resource) assigned from a first computation unit coupled to a first shared resource to a second computation unit coupled to a second shared resource , in response to determining from the measured data values that a first value corresponding to the utilization of the first shared resource exceeds a predetermined threshold and a second value corresponding to the utilization of the second shared resource does not exceed the predetermined threshold .

WO2011025720A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

WO2011025720A1
CLAIM 12
. The method as recited in claim 9 , further comprising storing configurable predetermined thresholds corresponding to hardware performance metrics used in said determination (memory usage, memory usage tracking) .
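
WO2011025720A1 claim 1, together with the configurable thresholds of claim 12, is mapped above against the '134 patent's threshold-driven re-prioritization. The sketch below shows that prior-art mechanism in miniature: a scheduler reads hardware performance counters and moves a thread away from an over-utilized shared resource to one sitting under the threshold. The counter names, threshold values, and thread attributes are assumptions, not taken from the WO publication.

# Sketch of the WO2011025720A1 mechanism: reassign a thread when the utilization of its
# shared resource exceeds a configurable threshold and another shared resource does not.
THRESHOLDS = {"cache": 0.85, "fpu": 0.90, "io": 0.75}  # claim 12: stored, configurable values (numbers assumed)

def maybe_reassign(thread, utilization: dict[str, dict[str, float]]) -> str:
    # utilization[resource_kind][unit_id] holds values measured by performance monitoring hardware.
    kind = thread.shared_resource_kind        # assumed attribute naming
    limit = THRESHOLDS[kind]
    if utilization[kind][thread.unit_id] <= limit:
        return thread.unit_id                 # first shared resource is not over-utilized; stay put
    for unit_id, value in utilization[kind].items():
        if value <= limit:                    # second shared resource sits under the threshold
            thread.unit_id = unit_id          # reassign the thread to the other computation unit
            return unit_id
    return thread.unit_id                     # nothing better available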

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the first resource (first thread) management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
WO2011025720A1
CLAIM 1
. A computing system comprising : one or more microprocessors comprising performance monitoring hardware ;
a memory coupled to the one or more microprocessors , wherein the memory stores a program comprising program code ;
and an operating system comprising a scheduler , wherein the scheduler is configured to : assign a plurality of software threads corresponding to the program code to a plurality of computation units ;
receive measured data values from the performance monitoring hardware as the one or more microprocessors process the software threads of the program code ;
and reassign a first thread (first resource) assigned from a first computation unit coupled to a first shared resource to a second computation unit coupled to a second shared resource , in response to determining from the measured data values that a first value corresponding to the utilization of the first shared resource exceeds a predetermined threshold and a second value corresponding to the utilization of the second shared resource does not exceed the predetermined threshold .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (said determination) tracking .
WO2011025720A1
CLAIM 12
. The method as recited in claim 9 , further comprising storing configurable predetermined thresholds corresponding to hardware performance metrics used in said determination (memory usage, memory usage tracking) .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing (shared resources) resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (said determination) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource (first thread) management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2011025720A1
CLAIM 1
. A computing system comprising : one or more microprocessors comprising performance monitoring hardware ;
a memory coupled to the one or more microprocessors , wherein the memory stores a program comprising program code ;
and an operating system comprising a scheduler , wherein the scheduler is configured to : assign a plurality of software threads corresponding to the program code to a plurality of computation units ;
receive measured data values from the performance monitoring hardware as the one or more microprocessors process the software threads of the program code ;
and reassign a first thread (first resource) assigned from a first computation unit coupled to a first shared resource to a second computation unit coupled to a second shared resource , in response to determining from the measured data values that a first value corresponding to the utilization of the first shared resource exceeds a predetermined threshold and a second value corresponding to the utilization of the second shared resource does not exceed the predetermined threshold .

WO2011025720A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

WO2011025720A1
CLAIM 12
. The method as recited in claim 9 , further comprising storing configurable predetermined thresholds corresponding to hardware performance metrics used in said determination (memory usage, memory usage tracking) .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (shared resources) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
WO2011025720A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (shared resources) resource manager to use LIRS based processor usage tracking .
WO2011025720A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (shared resources) resource manager to use LIRS based memory usage (said determination) tracking .
WO2011025720A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

WO2011025720A1
CLAIM 12
. The method as recited in claim 9 , further comprising storing configurable predetermined thresholds corresponding to hardware performance metrics used in said determination (memory usage, memory usage tracking) .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (shared resources) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
WO2011025720A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .
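
Claims 6, 12 and 18 of the '134 patent substitute a least recently used (LRU) replacement scheme for LIRS. For contrast with the LIRS sketch earlier in this chart, a minimal LRU ordering of VMs by last-touch time might look like the following (purely illustrative):

# Minimal LRU ordering: the VM touched longest ago is the first candidate to lose
# cloud resources or to be migrated (assumes at least one VM has been touched).
from collections import OrderedDict

class LruVmTracker:
    def __init__(self):
        self.order = OrderedDict()      # vm_id -> None, kept in recency order

    def touch(self, vm_id: str) -> None:
        self.order.pop(vm_id, None)
        self.order[vm_id] = None        # most recently used moves to the end

    def least_recently_used(self) -> str:
        return next(iter(self.order))   # front of the OrderedDict is the LRU VM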

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing (shared resources) environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (shared resources) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource (first thread) management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (said determination) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2011025720A1
CLAIM 1
. A computing system comprising : one or more microprocessors comprising performance monitoring hardware ;
a memory coupled to the one or more microprocessors , wherein the memory stores a program comprising program code ;
and an operating system comprising a scheduler , wherein the scheduler is configured to : assign a plurality of software threads corresponding to the program code to a plurality of computation units ;
receive measured data values from the performance monitoring hardware as the one or more microprocessors process the software threads of the program code ;
and reassign a first thread (first resource) assigned from a first computation unit coupled to a first shared resource to a second computation unit coupled to a second shared resource , in response to determining from the measured data values that a first value corresponding to the utilization of the first shared resource exceeds a predetermined threshold and a second value corresponding to the utilization of the second shared resource does not exceed the predetermined threshold .

WO2011025720A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

WO2011025720A1
CLAIM 12
. The method as recited in claim 9 , further comprising storing configurable predetermined thresholds corresponding to hardware performance metrics used in said determination (memory usage, memory usage tracking) .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (shared resources) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
WO2011025720A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (shared resources) resource manager to use LIRS based processor usage tracking .
WO2011025720A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (shared resources) resource manager to use LIRS based memory usage (said determination) tracking .
WO2011025720A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

WO2011025720A1
CLAIM 12
. The method as recited in claim 9 , further comprising storing configurable predetermined thresholds corresponding to hardware performance metrics used in said determination (memory usage, memory usage tracking) .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (shared resources) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
WO2011025720A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20110289555A1

Filed: 2010-05-18     Issued: 2011-11-24

Mechanism for Utilization of Virtual Machines by a Community Cloud

(Original Assignee) Red Hat Inc     (Current Assignee) Red Hat Inc

Greg DeKoenigsberg, Mike McGrath
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment (virtual machines, cloud services) , comprising : determining a consumption rate of cloud resources (virtual machines, cloud services) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates (end user) for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20110289555A1
CLAIM 1
. A computer-implemented method , comprising : authenticating , by a central administrative computing device , a virtual machine (VM) to be joined to a cloud environment managed by the central administrative computing device as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
updating , by the central administrative computing device , a database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and providing , by the central administrative computing device , instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US20110289555A1
CLAIM 4
. The method of claim 1 , wherein an end user (access rates) of the workstation is not able to access the operational state of the VM .

US20110289555A1
CLAIM 9
. A system , comprising : a processor ;
a memory communicably coupled to the processor ;
a database of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) utilized as cloud computing resources ;
and a cloud environment administrative component enabled by the processor and the memory , the cloud environment administrative component operable to : authenticate a virtual machine (VM) to be joined to a cloud environment managed by the cloud environment administrative component as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
update the database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and provide instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .
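
US20110289555A1 claims 1 and 9, quoted throughout these rows, describe a central administrator that authenticates a workstation-hosted VM into the cloud, records its operational status in a database, and issues instructions based on current demand versus supply. A hypothetical end-to-end rendering (the class name, the credential check, and the demand/supply rule are assumptions):

# Hypothetical sketch of the US20110289555A1 flow: authenticate a workstation-hosted VM,
# record its status in the VM database, and instruct it based on demand versus supply.
class CommunityCloudAdmin:
    def __init__(self):
        self.vm_db = {}                            # database of VMs used as cloud resources

    def join(self, vm_id: str, credentials: str, demand: int, supply: int) -> str:
        if not self._authenticate(vm_id, credentials):
            return "rejected"
        self.vm_db[vm_id] = {"status": "online"}   # update the database with operational status
        # Instructions depend on current demand for cloud services vs. available supply.
        return "serve-workloads" if demand > supply else "standby"

    def _authenticate(self, vm_id: str, credentials: str) -> bool:
        return credentials == "valid"              # placeholder credential check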

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines, cloud services) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20110289555A1
CLAIM 1
. A computer-implemented method , comprising : authenticating , by a central administrative computing device , a virtual machine (VM) to be joined to a cloud environment managed by the central administrative computing device as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
updating , by the central administrative computing device , a database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and providing , by the central administrative computing device , instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US20110289555A1
CLAIM 9
. A system , comprising : a processor ;
a memory communicably coupled to the processor ;
a database of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) utilized as cloud computing resources ;
and a cloud environment administrative component enabled by the processor and the memory , the cloud environment administrative component operable to : authenticate a virtual machine (VM) to be joined to a cloud environment managed by the cloud environment administrative component as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
update the database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and provide instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines, cloud services) using the LIRS replacement scheme comprises using LIRS based processor usage tracking .
US20110289555A1
CLAIM 1
. A computer-implemented method , comprising : authenticating , by a central administrative computing device , a virtual machine (VM) to be joined to a cloud environment managed by the central administrative computing device as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
updating , by the central administrative computing device , a database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and providing , by the central administrative computing device , instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US20110289555A1
CLAIM 9
. A system , comprising : a processor ;
a memory communicably coupled to the processor ;
a database of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) utilized as cloud computing resources ;
and a cloud environment administrative component enabled by the processor and the memory , the cloud environment administrative component operable to : authenticate a virtual machine (VM) to be joined to a cloud environment managed by the cloud environment administrative component as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
update the database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and provide instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines, cloud services) using the LIRS replacement scheme comprises using LIRS based memory usage tracking .
US20110289555A1
CLAIM 1
. A computer-implemented method , comprising : authenticating , by a central administrative computing device , a virtual machine (VM) to be joined to a cloud environment managed by the central administrative computing device as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
updating , by the central administrative computing device , a database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and providing , by the central administrative computing device , instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US20110289555A1
CLAIM 9
. A system , comprising : a processor ;
a memory communicably coupled to the processor ;
a database of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) utilized as cloud computing resources ;
and a cloud environment administrative component enabled by the processor and the memory , the cloud environment administrative component operable to : authenticate a virtual machine (VM) to be joined to a cloud environment managed by the cloud environment administrative component as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
update the database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and provide instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (virtual machines, cloud services) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines, cloud services) , or resources included in virtual private networks (VPNs) .
US20110289555A1
CLAIM 1
. A computer-implemented method , comprising : authenticating , by a central administrative computing device , a virtual machine (VM) to be joined to a cloud environment managed by the central administrative computing device as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
updating , by the central administrative computing device , a database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and providing , by the central administrative computing device , instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US20110289555A1
CLAIM 9
. A system , comprising : a processor ;
a memory communicably coupled to the processor ;
a database of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) utilized as cloud computing resources ;
and a cloud environment administrative component enabled by the processor and the memory , the cloud environment administrative component operable to : authenticate a virtual machine (VM) to be joined to a cloud environment managed by the cloud environment administrative component as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
update the database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and provide instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines, cloud services) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
US20110289555A1
CLAIM 1
. A computer-implemented method , comprising : authenticating , by a central administrative computing device , a virtual machine (VM) to be joined to a cloud environment managed by the central administrative computing device as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
updating , by the central administrative computing device , a database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and providing , by the central administrative computing device , instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US20110289555A1
CLAIM 9
. A system , comprising : a processor ;
a memory communicably coupled to the processor ;
a database of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) utilized as cloud computing resources ;
and a cloud environment administrative component enabled by the processor and the memory , the cloud environment administrative component operable to : authenticate a virtual machine (VM) to be joined to a cloud environment managed by the cloud environment administrative component as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
update the database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and provide instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources (virtual machines, cloud services) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates (end user) for the one or more virtual machines in a cloud computing environment (virtual machines, cloud services) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20110289555A1
CLAIM 1
. A computer-implemented method , comprising : authenticating , by a central administrative computing device , a virtual machine (VM) to be joined to a cloud environment managed by the central administrative computing device as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
updating , by the central administrative computing device , a database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and providing , by the central administrative computing device , instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US20110289555A1
CLAIM 4
. The method of claim 1 , wherein an end user (access rates) of the workstation is not able to access the operational state of the VM .

US20110289555A1
CLAIM 9
. A system , comprising : a processor ;
a memory communicably coupled to the processor ;
a database of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) utilized as cloud computing resources ;
and a cloud environment administrative component enabled by the processor and the memory , the cloud environment administrative component operable to : authenticate a virtual machine (VM) to be joined to a cloud environment managed by the cloud environment administrative component as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
update the database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and provide instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines, cloud services) using a low inter-reference recency set (LIRS) replacement scheme .
US20110289555A1
CLAIM 1
. A computer-implemented method , comprising : authenticating , by a central administrative computing device , a virtual machine (VM) to be joined to a cloud environment managed by the central administrative computing device as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
updating , by the central administrative computing device , a database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and providing , by the central administrative computing device , instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US20110289555A1
CLAIM 9
. A system , comprising : a processor ;
a memory communicably coupled to the processor ;
a database of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) utilized as cloud computing resources ;
and a cloud environment administrative component enabled by the processor and the memory , the cloud environment administrative component operable to : authenticate a virtual machine (VM) to be joined to a cloud environment managed by the cloud environment administrative component as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
update the database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and provide instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (virtual machines, cloud services) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines, cloud services) , or resources included in virtual private networks (VPNs) .
US20110289555A1
CLAIM 1
. A computer-implemented method , comprising : authenticating , by a central administrative computing device , a virtual machine (VM) to be joined to a cloud environment managed by the central administrative computing device as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
updating , by the central administrative computing device , a database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and providing , by the central administrative computing device , instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US20110289555A1
CLAIM 9
. A system , comprising : a processor ;
a memory communicably coupled to the processor ;
a database of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) utilized as cloud computing resources ;
and a cloud environment administrative component enabled by the processor and the memory , the cloud environment administrative component operable to : authenticate a virtual machine (VM) to be joined to a cloud environment managed by the cloud environment administrative component as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
update the database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and provide instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines, cloud services) using least recently used (LRU) replacement scheme .
US20110289555A1
CLAIM 1
. A computer-implemented method , comprising : authenticating , by a central administrative computing device , a virtual machine (VM) to be joined to a cloud environment managed by the central administrative computing device as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
updating , by the central administrative computing device , a database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and providing , by the central administrative computing device , instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US20110289555A1
CLAIM 9
. A system , comprising : a processor ;
a memory communicably coupled to the processor ;
a database of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) utilized as cloud computing resources ;
and a cloud environment administrative component enabled by the processor and the memory , the cloud environment administrative component operable to : authenticate a virtual machine (VM) to be joined to a cloud environment managed by the cloud environment administrative component as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
update the database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and provide instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment (virtual machines, cloud services) , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (virtual machines, cloud services) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates (end user) , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20110289555A1
CLAIM 1
. A computer-implemented method , comprising : authenticating , by a central administrative computing device , a virtual machine (VM) to be joined to a cloud environment managed by the central administrative computing device as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
updating , by the central administrative computing device , a database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and providing , by the central administrative computing device , instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US20110289555A1
CLAIM 4
. The method of claim 1 , wherein an end user (access rates) of the workstation is not able to access the operational state of the VM .

US20110289555A1
CLAIM 9
. A system , comprising : a processor ;
a memory communicably coupled to the processor ;
a database of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) utilized as cloud computing resources ;
and a cloud environment administrative component enabled by the processor and the memory , the cloud environment administrative component operable to : authenticate a virtual machine (VM) to be joined to a cloud environment managed by the cloud environment administrative component as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
update the database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and provide instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .
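
One metric recited in '134 claims 1, 7 and 13, the change region size of a VM's graphical display, has no direct counterpart in the quoted reference text. For readers unfamiliar with the idea, a naive way to compute such a metric is to count display cells that differ between two framebuffer snapshots; the sketch below only illustrates that concept and is not taken from the patent.

# Naive "change region size": count display cells that differ between two snapshots of a
# VM's framebuffer, represented here as plain 2-D lists of pixel values.
def change_region_size(prev_frame, curr_frame) -> int:
    changed = 0
    for prev_row, curr_row in zip(prev_frame, curr_frame):
        for prev_px, curr_px in zip(prev_row, curr_row):
            if prev_px != curr_px:
                changed += 1
    return changed

# Example: a 2x3 display in which two cells changed between frames.
assert change_region_size([[0, 0, 0], [1, 1, 1]], [[0, 9, 0], [1, 1, 2]]) == 2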

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines, cloud services) using a low inter-reference recency set (LIRS) replacement scheme .
US20110289555A1
CLAIM 1
. A computer-implemented method , comprising : authenticating , by a central administrative computing device , a virtual machine (VM) to be joined to a cloud environment managed by the central administrative computing device as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
updating , by the central administrative computing device , a database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and providing , by the central administrative computing device , instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US20110289555A1
CLAIM 9
. A system , comprising : a processor ;
a memory communicably coupled to the processor ;
a database of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) utilized as cloud computing resources ;
and a cloud environment administrative component enabled by the processor and the memory , the cloud environment administrative component operable to : authenticate a virtual machine (VM) to be joined to a cloud environment managed by the cloud environment administrative component as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
update the database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and provide instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (virtual machines, cloud services) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines, cloud services) , or resources included in virtual private networks (VPNs) .
US20110289555A1
CLAIM 1
. A computer-implemented method , comprising : authenticating , by a central administrative computing device , a virtual machine (VM) to be joined to a cloud environment managed by the central administrative computing device as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
updating , by the central administrative computing device , a database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and providing , by the central administrative computing device , instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US20110289555A1
CLAIM 9
. A system , comprising : a processor ;
a memory communicably coupled to the processor ;
a database of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) utilized as cloud computing resources ;
and a cloud environment administrative component enabled by the processor and the memory , the cloud environment administrative component operable to : authenticate a virtual machine (VM) to be joined to a cloud environment managed by the cloud environment administrative component as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
update the database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and provide instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines, cloud services) using least recently used (LRU) replacement scheme .
US20110289555A1
CLAIM 1
. A computer-implemented method , comprising : authenticating , by a central administrative computing device , a virtual machine (VM) to be joined to a cloud environment managed by the central administrative computing device as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
updating , by the central administrative computing device , a database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and providing , by the central administrative computing device , instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .

US20110289555A1
CLAIM 9
. A system , comprising : a processor ;
a memory communicably coupled to the processor ;
a database of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) utilized as cloud computing resources ;
and a cloud environment administrative component enabled by the processor and the memory , the cloud environment administrative component operable to : authenticate a virtual machine (VM) to be joined to a cloud environment managed by the cloud environment administrative component as a cloud computing resource , wherein the VM is operating on a workstation that is not a dedicated cloud computing resource ;
update the database of VMs utilized as cloud computing resources with information of the VM related to its operational status ;
and provide instructions for the VM to operate as a cloud computing resource , the instructions based on current demand for cloud services of the cloud environment and an overall current supply of cloud computing resources presently available in the cloud environment .
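
For orientation, the administrative workflow recited in claims 1 and 9 of US20110289555A1 above (authenticate a joining VM, record its operational status in a database, and issue instructions derived from current demand versus current supply) can be rendered as a short sketch. The class and method names below (CloudAdmin, authenticate, register, instruct) are hypothetical and not taken from the reference; this illustrates the recited flow, not the reference's implementation.

    # Minimal sketch (hypothetical names) of the admin flow recited in
    # US20110289555A1: authenticate a VM, record its status, then issue
    # instructions based on current demand versus available supply.
    from dataclasses import dataclass, field

    @dataclass
    class VMRecord:
        vm_id: str
        status: str = "idle"            # operational status kept in the VM database

    @dataclass
    class CloudAdmin:
        trusted_tokens: set
        vm_database: dict = field(default_factory=dict)

        def authenticate(self, vm_id: str, token: str) -> bool:
            # The VM runs on a workstation that is not a dedicated cloud
            # resource; admit it only if it presents a recognized credential.
            return token in self.trusted_tokens

        def register(self, vm_id: str) -> None:
            self.vm_database[vm_id] = VMRecord(vm_id)       # update the database of VMs

        def instruct(self, vm_id: str, demand: float, supply: float) -> str:
            # Instructions depend on current demand for cloud services versus
            # the overall supply of cloud resources presently available.
            record = self.vm_database[vm_id]
            record.status = "serving" if demand > supply else "standby"
            return record.status

    admin = CloudAdmin(trusted_tokens={"token-123"})
    if admin.authenticate("vm-1", "token-123"):
        admin.register("vm-1")
        print(admin.instruct("vm-1", demand=80.0, supply=60.0))   # -> "serving"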




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN102238204A

Filed: 2010-04-23     Issued: 2011-11-09

Method and system for acquiring network data (网络数据的获取方法和系统)

(Original Assignee) Tencent Technology Shenzhen Co Ltd     (Current Assignee) Tencent Technology Shenzhen Co Ltd

郑志昊, 田宇红, 郭伟, 周路明, 石玉磊, 高宇鹏
US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage tracking (包括控制) .
CN102238204A
CLAIM 14
. The network data acquisition system according to claim 10, characterized in that it further comprises an input module for inputting a network access request, and the processing module further comprises a control (processor usage tracking) unit connected to the input module and the storage module, the control unit judging, according to the network access request, whether the storage module stores requested data saved through automatic downloading; if the storage module stores the requested data, the requested data stored in the storage module is read; if the storage module does not store the requested data, the download unit is instructed to download the requested data.
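
The control-unit behavior recited in CN102238204A claim 14 above reduces to a cache-first lookup: serve a network access request from locally stored, auto-downloaded data when it is present, otherwise instruct the download unit to fetch it. A minimal sketch follows; the function and variable names are hypothetical and the storage is simplified to a dictionary.

    # Cache-first lookup corresponding to the control unit of CN102238204A
    # claim 14: read stored data if present, otherwise trigger a download.
    def fetch(request_key, local_store, download_unit):
        if request_key in local_store:             # data saved by auto-download
            return local_store[request_key]
        data = download_unit(request_key)          # instruct the download unit
        local_store[request_key] = data
        return data

    store = {"news/today": "<cached page>"}
    print(fetch("news/today", store, download_unit=lambda k: f"<downloaded {k}>"))
    print(fetch("news/sports", store, download_unit=lambda k: f"<downloaded {k}>"))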

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (在网络) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN102238204A
CLAIM 17
. The network data acquisition system according to claim 10, 11, 12, 14, 15 or 16, characterized in that the detection unit periodically detects the network connection status, and the judging unit is further configured to, when the network (community cloud) connection status does not satisfy a predetermined condition, further judge whether the background automatic download function is enabled; if the background automatic download function is enabled, the download unit is instructed to turn off the background automatic download function.
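
Claim 17 of CN102238204A, quoted above, amounts to a periodic guard: when the network connection no longer satisfies a predetermined condition and background auto-download is enabled, the download unit is told to disable it. The toy function below illustrates only that decision; the parameter names and the numeric quality measure are assumptions.

    # Illustrative guard for CN102238204A claim 17: disable background
    # auto-download when link quality falls below a predetermined condition.
    def check_connection(link_quality: float, threshold: float, auto_download_on: bool) -> bool:
        if link_quality < threshold and auto_download_on:
            auto_download_on = False               # instruct the download unit to stop
        return auto_download_on

    print(check_connection(link_quality=0.2, threshold=0.5, auto_download_on=True))   # False
    print(check_connection(link_quality=0.9, threshold=0.5, auto_download_on=True))   # True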

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage tracking (包括控制) .
CN102238204A
CLAIM 14
. The network data acquisition system according to claim 10, characterized in that it further comprises an input module for inputting a network access request, and the processing module further comprises a control (processor usage tracking) unit connected to the input module and the storage module, the control unit judging, according to the network access request, whether the storage module stores requested data saved through automatic downloading; if the storage module stores the requested data, the requested data stored in the storage module is read; if the storage module does not store the requested data, the download unit is instructed to download the requested data.

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (在网络) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN102238204A
CLAIM 17
. The network data acquisition system according to claim 10, 11, 12, 14, 15 or 16, characterized in that the detection unit periodically detects the network connection status, and the judging unit is further configured to, when the network (community cloud) connection status does not satisfy a predetermined condition, further judge whether the background automatic download function is enabled; if the background automatic download function is enabled, the download unit is instructed to turn off the background automatic download function.

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage tracking (包括控制) .
CN102238204A
CLAIM 14
. The network data acquisition system according to claim 10, characterized in that it further comprises an input module for inputting a network access request, and the processing module further comprises a control (processor usage tracking) unit connected to the input module and the storage module, the control unit judging, according to the network access request, whether the storage module stores requested data saved through automatic downloading; if the storage module stores the requested data, the requested data stored in the storage module is read; if the storage module does not store the requested data, the download unit is instructed to download the requested data.

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (在网络) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN102238204A
CLAIM 17
. The network data acquisition system according to claim 10, 11, 12, 14, 15 or 16, characterized in that the detection unit periodically detects the network connection status, and the judging unit is further configured to, when the network (community cloud) connection status does not satisfy a predetermined condition, further judge whether the background automatic download function is enabled; if the background automatic download function is enabled, the download unit is instructed to turn off the background automatic download function.
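
Claims 5, 11 and 17 of US9635134B2 enumerate the categories of alternate cloud resources to which consumption may be migrated (public, community, private and hybrid clouds, Internet resources, and VPNs). The enumeration and the preference order in the sketch below are illustrative assumptions; the patent does not prescribe a selection policy.

    # Illustrative model of the "alternate cloud resources" categories in
    # claims 5, 11 and 17, with an assumed preference order for migration.
    from enum import Enum, auto

    class AlternateCloud(Enum):
        PUBLIC = auto()
        COMMUNITY = auto()
        PRIVATE = auto()
        HYBRID = auto()
        INTERNET = auto()
        VPN = auto()

    def pick_target(available: list) -> AlternateCloud:
        preference = [AlternateCloud.PRIVATE, AlternateCloud.HYBRID,
                      AlternateCloud.COMMUNITY, AlternateCloud.PUBLIC,
                      AlternateCloud.VPN, AlternateCloud.INTERNET]
        return next(c for c in preference if c in available)

    print(pick_target([AlternateCloud.PUBLIC, AlternateCloud.VPN]))   # AlternateCloud.PUBLIC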




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN102379107A

Filed: 2010-04-14     Issued: 2012-03-14

Method, apparatus and computer program product for providing an indication of device-to-device communication availability (用于提供对设备到设备的通信可用性的指示的方法、装置和计算机程序产品)

(Original Assignee) Nokia Oyj     (Current Assignee) Nokia Technologies Oy

K·F·多普勒, M·P·O·里纳
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing (一种计算) environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102379107A
CLAIM 15
. A computer (cloud computing, cloud computing resource manager) program product comprising at least one computer-readable storage medium having computer-executable program code portions stored therein, the computer-executable program code portions comprising: program code instructions for receiving an indication related to a state of an application or a request of the application; program code instructions for determining availability of a device-to-device connection for peer-to-peer communication; and program code instructions for providing, via a processor, a notification to the application, the notification indicating availability of the device-to-device connection with a peer.
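
Claim 1 of US9635134B2, quoted at the top of this pairing, recites a concrete control loop: measure per-VM consumption of cloud resources, rank the VMs under a first resource management scheme, switch to a second scheme when the change in consumption exceeds a predetermined threshold and the allowed maximum capacity is at stake, and migrate the consumption of the highest-priority VMs to alternate cloud resources outside the environment. The sketch below compresses that loop under assumed data structures; the metric normalization, the threshold value and the simplified "second scheme" (rank by magnitude of the consumption change) are illustrative choices, not the patent's implementation.

    # Compressed sketch of the control loop recited in claim 1 of US9635134B2.
    from dataclasses import dataclass

    @dataclass
    class VMSample:
        vm_id: str
        cpu: float             # processor usage, normalized 0..1
        memory: float          # memory usage, normalized 0..1
        io_rate: float         # I/O access rate, normalized 0..1
        change_region: float   # changed-region size of the VM's graphical display, 0..1

    def consumption(s: VMSample) -> float:
        return max(s.cpu, s.memory, s.io_rate, s.change_region)

    def manage(prev: dict, current: list, threshold: float, max_capacity: float):
        # First scheme: rank VMs by current consumption (stand-in for LIRS/LRU).
        ranked = sorted(current, key=consumption, reverse=True)
        total = sum(consumption(s) for s in current)

        # Has any VM's change in consumption exceeded the predetermined threshold?
        exceeded = any(abs(consumption(s) - prev.get(s.vm_id, 0.0)) > threshold
                       for s in current)

        migrate = []
        if exceeded and total > max_capacity:
            # Second scheme: re-rank by consumption change, then migrate the top
            # VM's consumption to alternate cloud resources outside the environment.
            ranked = sorted(current,
                            key=lambda s: abs(consumption(s) - prev.get(s.vm_id, 0.0)),
                            reverse=True)
            migrate = [ranked[0].vm_id]
        return [s.vm_id for s in ranked], migrate

    prev = {"vm-a": 0.30, "vm-b": 0.20}
    now = [VMSample("vm-a", 0.90, 0.40, 0.10, 0.05),
           VMSample("vm-b", 0.25, 0.20, 0.05, 0.00)]
    print(manage(prev, now, threshold=0.25, max_capacity=1.0))   # (['vm-a', 'vm-b'], ['vm-a'])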

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage tracking (计算机程序产品) .
CN102379107A
CLAIM 15
. A computer program product (processor usage tracking) comprising at least one computer-readable storage medium having computer-executable program code portions stored therein, the computer-executable program code portions comprising: program code instructions for receiving an indication related to a state of an application or a request of the application; program code instructions for determining availability of a device-to-device connection for peer-to-peer communication; and program code instructions for providing, via a processor, a notification to the application, the notification indicating availability of the device-to-device connection with a peer.
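
The Nokia reference claim quoted above (CN102379107A claim 15) describes a small notification pipeline: receive an indication or request from an application, determine whether a device-to-device connection to a peer is available, and notify the application via a processor. The callback-style interface below is an assumption made for illustration and is not the reference's API.

    # Toy rendering of the notification flow in CN102379107A claim 15.
    def handle_request(app_callback, peer_reachable: bool) -> None:
        available = peer_reachable                  # availability of the D2D connection
        app_callback({"d2d_available": available})  # notify the application

    handle_request(lambda note: print("notify app:", note), peer_reachable=True)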

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing (一种计算) resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102379107A
CLAIM 15
. A computer (cloud computing, cloud computing resource manager) program product comprising at least one computer-readable storage medium having computer-executable program code portions stored therein, the computer-executable program code portions comprising: program code instructions for receiving an indication related to a state of an application or a request of the application; program code instructions for determining availability of a device-to-device connection for peer-to-peer communication; and program code instructions for providing, via a processor, a notification to the application, the notification indicating availability of the device-to-device connection with a peer.

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
CN102379107A
CLAIM 15
. A computer (cloud computing, cloud computing resource manager) program product comprising at least one computer-readable storage medium having computer-executable program code portions stored therein, the computer-executable program code portions comprising: program code instructions for receiving an indication related to a state of an application or a request of the application; program code instructions for determining availability of a device-to-device connection for peer-to-peer communication; and program code instructions for providing, via a processor, a notification to the application, the notification indicating availability of the device-to-device connection with a peer.
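
Claims 2, 8 and 14 of US9635134B2 name the low inter-reference recency set (LIRS) replacement scheme as the first prioritization mechanism. LIRS orders entries by inter-reference recency, that is, by how many distinct items were accessed between an item's last two accesses. The sketch below computes that quantity directly from an access trace and deprioritizes the VM with the weakest locality; a production LIRS maintains LIR and HIR sets with a pruned stack rather than rescanning the trace, so this is a simplified illustration of the ordering idea only.

    # Simplified illustration of the LIRS ordering idea used for VM
    # prioritization in claims 2, 8 and 14 of US9635134B2.
    def inter_reference_recency(trace: list, item) -> float:
        hits = [i for i, x in enumerate(trace) if x == item]
        if len(hits) < 2:
            return float("inf")                    # accessed at most once: weakest locality
        between = trace[hits[-2] + 1:hits[-1]]
        return len(set(between))                   # distinct items between the last two accesses

    def lirs_victim(trace: list) -> str:
        # Deprioritize the VM with the largest inter-reference recency.
        return max(set(trace), key=lambda vm: inter_reference_recency(trace, vm))

    access_trace = ["vm1", "vm2", "vm1", "vm3", "vm2", "vm1"]
    print(lirs_victim(access_trace))               # "vm3"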

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based processor usage tracking (计算机程序产品) .
CN102379107A
CLAIM 15
. A computer program product (processor usage tracking) comprising at least one computer-readable storage medium having computer-executable program code portions stored therein, the computer-executable program code portions comprising: program code instructions for receiving an indication related to a state of an application or a request of the application; program code instructions for determining availability of a device-to-device connection for peer-to-peer communication; and program code instructions for providing, via a processor, a notification to the application, the notification indicating availability of the device-to-device connection with a peer.

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based memory usage tracking .
CN102379107A
CLAIM 15
. A computer (cloud computing, cloud computing resource manager) program product comprising at least one computer-readable storage medium having computer-executable program code portions stored therein, the computer-executable program code portions comprising: program code instructions for receiving an indication related to a state of an application or a request of the application; program code instructions for determining availability of a device-to-device connection for peer-to-peer communication; and program code instructions for providing, via a processor, a notification to the application, the notification indicating availability of the device-to-device connection with a peer.

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
CN102379107A
CLAIM 15
. A computer (cloud computing, cloud computing resource manager) program product comprising at least one computer-readable storage medium having computer-executable program code portions stored therein, the computer-executable program code portions comprising: program code instructions for receiving an indication related to a state of an application or a request of the application; program code instructions for determining availability of a device-to-device connection for peer-to-peer communication; and program code instructions for providing, via a processor, a notification to the application, the notification indicating availability of the device-to-device connection with a peer.
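
Claims 12 and 18 of US9635134B2 recite the alternative of prioritizing VMs with a least recently used (LRU) replacement scheme: the VM whose resources were touched longest ago is the first candidate to lose cloud resources. A minimal bookkeeping sketch using the standard library follows; the tracker class is hypothetical.

    # Minimal LRU ordering sketch for the prioritization in claims 12 and 18.
    from collections import OrderedDict

    class LRUTracker:
        def __init__(self):
            self._order = OrderedDict()

        def touch(self, vm_id: str) -> None:
            self._order.pop(vm_id, None)
            self._order[vm_id] = True              # most recently used moves to the end

        def least_recent(self) -> str:
            return next(iter(self._order))         # front of the dict is the LRU victim

    lru = LRUTracker()
    for vm in ["vm1", "vm2", "vm3", "vm1"]:
        lru.touch(vm)
    print(lru.least_recent())                      # "vm2"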

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing (一种计算) environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102379107A
CLAIM 15
. A computer (cloud computing, cloud computing resource manager) program product comprising at least one computer-readable storage medium having computer-executable program code portions stored therein, the computer-executable program code portions comprising: program code instructions for receiving an indication related to a state of an application or a request of the application; program code instructions for determining availability of a device-to-device connection for peer-to-peer communication; and program code instructions for providing, via a processor, a notification to the application, the notification indicating availability of the device-to-device connection with a peer.

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
CN102379107A
CLAIM 15
. A computer (cloud computing, cloud computing resource manager) program product comprising at least one computer-readable storage medium having computer-executable program code portions stored therein, the computer-executable program code portions comprising: program code instructions for receiving an indication related to a state of an application or a request of the application; program code instructions for determining availability of a device-to-device connection for peer-to-peer communication; and program code instructions for providing, via a processor, a notification to the application, the notification indicating availability of the device-to-device connection with a peer.

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based processor usage tracking (计算机程序产品) .
CN102379107A
CLAIM 15
. A computer program product (processor usage tracking) comprising at least one computer-readable storage medium having computer-executable program code portions stored therein, the computer-executable program code portions comprising: program code instructions for receiving an indication related to a state of an application or a request of the application; program code instructions for determining availability of a device-to-device connection for peer-to-peer communication; and program code instructions for providing, via a processor, a notification to the application, the notification indicating availability of the device-to-device connection with a peer.

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based memory usage tracking .
CN102379107A
CLAIM 15
. A computer (cloud computing, cloud computing resource manager) program product comprising at least one computer-readable storage medium having computer-executable program code portions stored therein, the computer-executable program code portions comprising: program code instructions for receiving an indication related to a state of an application or a request of the application; program code instructions for determining availability of a device-to-device connection for peer-to-peer communication; and program code instructions for providing, via a processor, a notification to the application, the notification indicating availability of the device-to-device connection with a peer.

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
CN102379107A
CLAIM 15
. A computer (cloud computing, cloud computing resource manager) program product comprising at least one computer-readable storage medium having computer-executable program code portions stored therein, the computer-executable program code portions comprising: program code instructions for receiving an indication related to a state of an application or a request of the application; program code instructions for determining availability of a device-to-device connection for peer-to-peer communication; and program code instructions for providing, via a processor, a notification to the application, the notification indicating availability of the device-to-device connection with a peer.




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN102395938A

Filed: 2010-02-04     Issued: 2012-03-28

Power supply and data center control (电源和数据中心控制)

(Original Assignee) American Power Conversion Corp     (Current Assignee) Schneider Electric IT Corp

卡尔·爱德华·马丁·萨科
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing (一种计算) environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (少一个数据) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102395938A
CLAIM 9
. The method of claim 1, comprising: determining an estimated life of at least one data (memory usage) center component based on at least one of the parameter, the estimated parameter, and the estimated power distribution; and generating a data center maintenance schedule based on the estimated life.

CN102395938A
CLAIM 27
. A computer-readable (cloud computing, cloud computing resource manager) medium having stored thereon a sequence of instructions including instructions that, when executed by a processor, cause the processor to: identify a parameter of at least one server of a plurality of servers, the plurality of servers forming at least part of a data center; identify an estimated parameter of a virtual server; determine an estimated power distribution from an uninterruptible power supply to the plurality of servers; select one of the plurality of servers based at least in part on the parameter, the estimated parameter, and the estimated power distribution; provide the virtual server to the selected server; and adjust the uninterruptible power supply to adjust the power provided to at least one of the plurality of servers.
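
The placement step recited in CN102395938A claim 27 above (identify server parameters and a virtual server's estimated parameters, estimate the power distribution, select a server and provide the virtual server to it) can be summarized as a constrained selection. The scoring rule in the sketch below, picking the server with the most power headroom, is an assumption for illustration.

    # Rough sketch of the virtual-server placement in CN102395938A claim 27.
    def place_virtual_server(servers: list, vm_cpu_estimate: float) -> str:
        candidates = [s for s in servers
                      if s["free_cpu"] >= vm_cpu_estimate and s["power_headroom_w"] > 0]
        best = max(candidates, key=lambda s: s["power_headroom_w"])
        best["free_cpu"] -= vm_cpu_estimate        # "provide" the virtual server to it
        return best["name"]

    servers = [
        {"name": "srv-1", "free_cpu": 4.0, "power_headroom_w": 120.0},
        {"name": "srv-2", "free_cpu": 8.0, "power_headroom_w": 300.0},
    ]
    print(place_virtual_server(servers, vm_cpu_estimate=2.0))   # "srv-2"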

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (少一个数据) tracking .
CN102395938A
CLAIM 9
. The method of claim 1, comprising: determining an estimated life of at least one data (memory usage) center component based on at least one of the parameter, the estimated parameter, and the estimated power distribution; and generating a data center maintenance schedule based on the estimated life.

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing (一种计算) resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (少一个数据) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102395938A
CLAIM 9
. The method of claim 1, comprising: determining an estimated life of at least one data (memory usage) center component based on at least one of the parameter, the estimated parameter, and the estimated power distribution; and generating a data center maintenance schedule based on the estimated life.

CN102395938A
CLAIM 27
. A computer-readable (cloud computing, cloud computing resource manager) medium having stored thereon a sequence of instructions including instructions that, when executed by a processor, cause the processor to: identify a parameter of at least one server of a plurality of servers, the plurality of servers forming at least part of a data center; identify an estimated parameter of a virtual server; determine an estimated power distribution from an uninterruptible power supply to the plurality of servers; select one of the plurality of servers based at least in part on the parameter, the estimated parameter, and the estimated power distribution; provide the virtual server to the selected server; and adjust the uninterruptible power supply to adjust the power provided to at least one of the plurality of servers.

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
CN102395938A
CLAIM 27
. A computer-readable (cloud computing, cloud computing resource manager) medium having stored thereon a sequence of instructions including instructions that, when executed by a processor, cause the processor to: identify a parameter of at least one server of a plurality of servers, the plurality of servers forming at least part of a data center; identify an estimated parameter of a virtual server; determine an estimated power distribution from an uninterruptible power supply to the plurality of servers; select one of the plurality of servers based at least in part on the parameter, the estimated parameter, and the estimated power distribution; provide the virtual server to the selected server; and adjust the uninterruptible power supply to adjust the power provided to at least one of the plurality of servers.

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based processor usage tracking .
CN102395938A
CLAIM 27
. A computer-readable (cloud computing, cloud computing resource manager) medium having stored thereon a sequence of instructions including instructions that, when executed by a processor, cause the processor to: identify a parameter of at least one server of a plurality of servers, the plurality of servers forming at least part of a data center; identify an estimated parameter of a virtual server; determine an estimated power distribution from an uninterruptible power supply to the plurality of servers; select one of the plurality of servers based at least in part on the parameter, the estimated parameter, and the estimated power distribution; provide the virtual server to the selected server; and adjust the uninterruptible power supply to adjust the power provided to at least one of the plurality of servers.

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based memory usage (少一个数据) tracking .
CN102395938A
CLAIM 9
. The method of claim 1, comprising: determining an estimated life of at least one data (memory usage) center component based on at least one of the parameter, the estimated parameter, and the estimated power distribution; and generating a data center maintenance schedule based on the estimated life.

CN102395938A
CLAIM 27
. A computer-readable (cloud computing, cloud computing resource manager) medium having stored thereon a sequence of instructions including instructions that, when executed by a processor, cause the processor to: identify a parameter of at least one server of a plurality of servers, the plurality of servers forming at least part of a data center; identify an estimated parameter of a virtual server; determine an estimated power distribution from an uninterruptible power supply to the plurality of servers; select one of the plurality of servers based at least in part on the parameter, the estimated parameter, and the estimated power distribution; provide the virtual server to the selected server; and adjust the uninterruptible power supply to adjust the power provided to at least one of the plurality of servers.

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
CN102395938A
CLAIM 27
. A computer-readable (cloud computing, cloud computing resource manager) medium having stored thereon a sequence of instructions including instructions that, when executed by a processor, cause the processor to: identify a parameter of at least one server of a plurality of servers, the plurality of servers forming at least part of a data center; identify an estimated parameter of a virtual server; determine an estimated power distribution from an uninterruptible power supply to the plurality of servers; select one of the plurality of servers based at least in part on the parameter, the estimated parameter, and the estimated power distribution; provide the virtual server to the selected server; and adjust the uninterruptible power supply to adjust the power provided to at least one of the plurality of servers.

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing (一种计算) environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (少一个数据) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102395938A
CLAIM 9
. The method of claim 1, comprising: determining an estimated life of at least one data (memory usage) center component based on at least one of the parameter, the estimated parameter, and the estimated power distribution; and generating a data center maintenance schedule based on the estimated life.

CN102395938A
CLAIM 27
. A computer-readable (cloud computing, cloud computing resource manager) medium having stored thereon a sequence of instructions including instructions that, when executed by a processor, cause the processor to: identify a parameter of at least one server of a plurality of servers, the plurality of servers forming at least part of a data center; identify an estimated parameter of a virtual server; determine an estimated power distribution from an uninterruptible power supply to the plurality of servers; select one of the plurality of servers based at least in part on the parameter, the estimated parameter, and the estimated power distribution; provide the virtual server to the selected server; and adjust the uninterruptible power supply to adjust the power provided to at least one of the plurality of servers.

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
CN102395938A
CLAIM 27
. A computer-readable (cloud computing, cloud computing resource manager) medium having stored thereon a sequence of instructions including instructions that, when executed by a processor, cause the processor to: identify a parameter of at least one server of a plurality of servers, the plurality of servers forming at least part of a data center; identify an estimated parameter of a virtual server; determine an estimated power distribution from an uninterruptible power supply to the plurality of servers; select one of the plurality of servers based at least in part on the parameter, the estimated parameter, and the estimated power distribution; provide the virtual server to the selected server; and adjust the uninterruptible power supply to adjust the power provided to at least one of the plurality of servers.

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based processor usage tracking .
CN102395938A
CLAIM 27
. A computer-readable (cloud computing, cloud computing resource manager) medium having stored thereon a sequence of instructions including instructions that, when executed by a processor, cause the processor to: identify a parameter of at least one server of a plurality of servers, the plurality of servers forming at least part of a data center; identify an estimated parameter of a virtual server; determine an estimated power distribution from an uninterruptible power supply to the plurality of servers; select one of the plurality of servers based at least in part on the parameter, the estimated parameter, and the estimated power distribution; provide the virtual server to the selected server; and adjust the uninterruptible power supply to adjust the power provided to at least one of the plurality of servers.

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based memory usage (少一个数据) tracking .
CN102395938A
CLAIM 9
. The method of claim 1, comprising: determining an estimated life of at least one data (memory usage) center component based on at least one of the parameter, the estimated parameter, and the estimated power distribution; and generating a data center maintenance schedule based on the estimated life.

CN102395938A
CLAIM 27
. A computer-readable (cloud computing, cloud computing resource manager) medium having stored thereon a sequence of instructions including instructions that, when executed by a processor, cause the processor to: identify a parameter of at least one server of a plurality of servers, the plurality of servers forming at least part of a data center; identify an estimated parameter of a virtual server; determine an estimated power distribution from an uninterruptible power supply to the plurality of servers; select one of the plurality of servers based at least in part on the parameter, the estimated parameter, and the estimated power distribution; provide the virtual server to the selected server; and adjust the uninterruptible power supply to adjust the power provided to at least one of the plurality of servers.

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
CN102395938A
CLAIM 27
. A computer-readable (cloud computing, cloud computing resource manager) medium having stored thereon a sequence of instructions including instructions that, when executed by a processor, cause the processor to: identify a parameter of at least one server of a plurality of servers, the plurality of servers forming at least part of a data center; identify an estimated parameter of a virtual server; determine an estimated power distribution from an uninterruptible power supply to the plurality of servers; select one of the plurality of servers based at least in part on the parameter, the estimated parameter, and the estimated power distribution; provide the virtual server to the selected server; and adjust the uninterruptible power supply to adjust the power provided to at least one of the plurality of servers.




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN102163072A

Filed: 2009-12-08     Issued: 2011-08-24

Software-based thread remapping for power savings (用于节能的基于软件的线程重映射)

(Original Assignee) Intel Corporation (英特尔公司)

J·J·宋
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing (一种计算) environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102163072A
CLAIM 20
. A computer (cloud computing, cloud computing resource manager) program product comprising a computer-usable medium having computer-readable program code embodied therein, the computer-readable program code adapted to be executed to implement a method comprising: swapping work from a first thread unit of a first processor core to a first thread unit of a second processor core based on power state information of the first thread unit of the first core and the first thread unit of the second core; placing the first thread unit of the second core in a first power state and placing the first thread unit of the first core in a second power state; and placing the second core in a first core power state.
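
CN102163072A claim 20, quoted above, describes moving work between thread units on different cores according to their power-state information and then updating those power states. The dictionary-based sketch below shows only that swap; the state labels ("active", "idle", "sleep") and the unit names are placeholders, not the reference's terminology.

    # Schematic rendering of the power-aware thread swap in CN102163072A claim 20.
    def remap(units: dict) -> dict:
        src, dst = "core0.t0", "core1.t0"
        # Swap work based on the two thread units' power-state information.
        if units[src]["power"] == "active" and units[dst]["power"] == "idle":
            units[dst]["work"], units[src]["work"] = units[src]["work"], None
            units[dst]["power"], units[src]["power"] = "active", "sleep"   # first / second power state
        return units

    state = {"core0.t0": {"power": "active", "work": "job-42"},
             "core1.t0": {"power": "idle", "work": None}}
    print(remap(state))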

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage tracking (计算机程序产品) .
CN102163072A
CLAIM 20
. A computer program product (processor usage tracking) comprising a computer-usable medium having computer-readable program code embodied therein, the computer-readable program code adapted to be executed to implement a method comprising: swapping work from a first thread unit of a first processor core to a first thread unit of a second processor core based on power state information of the first thread unit of the first core and the first thread unit of the second core; placing the first thread unit of the second core in a first power state and placing the first thread unit of the first core in a second power state; and placing the second core in a first core power state.

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (上下文切) , Internet resources , or resources included in virtual private networks (VPNs) .
CN102163072A
CLAIM 11
. The method of claim 9, characterized in that the swapping further comprises performing a software-based context (hybrid cloud) switch.

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing (一种计算) resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102163072A
CLAIM 20
. A computer (cloud computing, cloud computing resource manager) program product comprising a computer-usable medium having computer-readable program code embodied therein, the computer-readable program code adapted to be executed to implement a method comprising: swapping work from a first thread unit of a first processor core to a first thread unit of a second processor core based on power state information of the first thread unit of the first core and the first thread unit of the second core; placing the first thread unit of the second core in a first power state and placing the first thread unit of the first core in a second power state; and placing the second core in a first core power state.

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
CN102163072A
CLAIM 20
. A computer (cloud computing, cloud computing resource manager) program product comprising a computer-usable medium having computer-readable program code embodied therein, the computer-readable program code adapted to be executed to implement a method comprising: swapping work from a first thread unit of a first processor core to a first thread unit of a second processor core based on power state information of the first thread unit of the first core and the first thread unit of the second core; placing the first thread unit of the second core in a first power state and placing the first thread unit of the first core in a second power state; and placing the second core in a first core power state.

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based processor usage tracking (计算机程序产品) .
CN102163072A
CLAIM 20
. A computer program product (processor usage tracking) comprising a computer-usable medium having computer-readable program code embodied therein, the computer-readable program code adapted to be executed to implement a method comprising: swapping work from a first thread unit of a first processor core to a first thread unit of a second processor core based on power state information of the first thread unit of the first core and the first thread unit of the second core; placing the first thread unit of the second core in a first power state and placing the first thread unit of the first core in a second power state; and placing the second core in a first core power state.

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based memory usage tracking .
CN102163072A
CLAIM 20
. A computer (cloud computing, cloud computing resource manager) program product comprising a computer-usable medium having computer-readable program code embodied therein, the computer-readable program code adapted to be executed to implement a method comprising: swapping work from a first thread unit of a first processor core to a first thread unit of a second processor core based on power state information of the first thread unit of the first core and the first thread unit of the second core; placing the first thread unit of the second core in a first power state and placing the first thread unit of the first core in a second power state; and placing the second core in a first core power state.

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (上下文切) , Internet resources , or resources included in virtual private networks (VPNs) .
CN102163072A
CLAIM 11
. The method of claim 9, characterized in that the swapping further comprises performing a software-based context (hybrid cloud) switch.

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
CN102163072A
CLAIM 20
. A computer (cloud computing, cloud computing resource manager) program product comprising a computer-usable medium having computer-readable program code embodied therein, the computer-readable program code adapted to be executed to implement a method comprising: swapping work from a first thread unit of a first processor core to a first thread unit of a second processor core based on power state information of the first thread unit of the first core and the first thread unit of the second core; placing the first thread unit of the second core in a first power state and placing the first thread unit of the first core in a second power state; and placing the second core in a first core power state.

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing (一种计算) environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN102163072A
CLAIM 20
. A computer (cloud computing, cloud computing resource manager) program product comprising a computer-usable medium having computer-readable program code embodied therein, the computer-readable program code adapted to be executed to implement a method comprising: swapping work from a first thread unit of a first processor core to a first thread unit of a second processor core based on power state information of the first thread unit of the first core and the first thread unit of the second core; placing the first thread unit of the second core in a first power state and placing the first thread unit of the first core in a second power state; and placing the second core in a first core power state.

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
CN102163072A
CLAIM 20
. A computer (cloud computing, cloud computing resource manager) program product comprising a computer-usable medium having computer-readable program code embodied therein, the computer-readable program code adapted to be executed to implement a method comprising: swapping work from a first thread unit of a first processor core to a first thread unit of a second processor core based on power state information of the first thread unit of the first core and the first thread unit of the second core; placing the first thread unit of the second core in a first power state and placing the first thread unit of the first core in a second power state; and placing the second core in a first core power state.

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based processor usage tracking (计算机程序产品) .
CN102163072A
CLAIM 20
. A computer program product (processor usage tracking) comprising a computer-usable medium having computer-readable program code embodied therein, the computer-readable program code adapted to be executed to implement a method comprising: swapping work from a first thread unit of a first processor core to a first thread unit of a second processor core based on power state information of the first thread unit of the first core and the first thread unit of the second core; placing the first thread unit of the second core in a first power state and placing the first thread unit of the first core in a second power state; and placing the second core in a first core power state.

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based memory usage tracking .
CN102163072A
CLAIM 20
. A computer (cloud computing, cloud computing resource manager) program product comprising a computer-usable medium having computer-readable program code embodied therein, the computer-readable program code adapted to be executed to implement a method comprising: swapping work from a first thread unit of a first processor core to a first thread unit of a second processor core based on power state information of the first thread unit of the first core and the first thread unit of the second core; placing the first thread unit of the second core in a first power state and placing the first thread unit of the first core in a second power state; and placing the second core in a first core power state.

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (上下文切) , Internet resources , or resources included in virtual private networks (VPNs) .
CN102163072A
CLAIM 11
. The method of claim 9, characterized in that the swapping further comprises performing a software-based context (hybrid cloud) switch.

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
CN102163072A
CLAIM 20
. A computer (cloud computing, cloud computing resource manager) program product comprising a computer-usable medium having computer-readable program code embodied therein, the computer-readable program code adapted to be executed to implement a method comprising: swapping work from a first thread unit of a first processor core to a first thread unit of a second processor core based on power state information of the first thread unit of the first core and the first thread unit of the second core; placing the first thread unit of the second core in a first power state and placing the first thread unit of the first core in a second power state; and placing the second core in a first core power state.




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20110055838A1

Filed: 2009-08-28     Issued: 2011-03-03

Optimized thread scheduling via hardware performance monitoring

(Original Assignee) Advanced Micro Devices Inc     (Current Assignee) Advanced Micro Devices Inc

William A. Moyes
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing (shared resources) environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (said determination) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource (first thread) management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20110055838A1
CLAIM 1
. A computing system comprising : one or more microprocessors comprising performance monitoring hardware ;
a memory coupled to the one or more microprocessors , wherein the memory stores a program comprising program code ;
and an operating system comprising a scheduler , wherein the scheduler is configured to : assign a plurality of software threads corresponding to the program code to a plurality of computation units ;
receive measured data values from the performance monitoring hardware as the one or more microprocessors process the software threads of the program code ;
and reassign a first thread (first resource) assigned from a first computation unit coupled to a first shared resource to a second computation unit coupled to a second shared resource , in response to determining from the measured data values that a first value corresponding to the utilization of the first shared resource exceeds a predetermined threshold and a second value corresponding to the utilization of the second shared resource does not exceed the predetermined threshold .

US20110055838A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

US20110055838A1
CLAIM 12
. The method as recited in claim 9 , further comprising storing configurable predetermined thresholds corresponding to hardware performance metrics used in said determination (memory usage, memory usage tracking) .
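
The reassignment rule of US20110055838A1 claim 1, quoted above, moves a thread from a computation unit whose shared resource exceeds a predetermined threshold to one whose shared resource does not. The sketch below keeps only that comparison; the data shapes and the static utilization figures are assumptions, and utilization is not re-measured after each move in this toy example.

    # Condensed sketch of the threshold-based thread reassignment in
    # US20110055838A1 claim 1.
    def reassign(threads: dict, utilization: dict, threshold: float) -> dict:
        for thread, unit in list(threads.items()):
            if utilization[unit] > threshold:
                target = min(utilization, key=utilization.get)
                if utilization[target] <= threshold:
                    threads[thread] = target       # reassign to the less loaded unit
        return threads

    threads = {"t1": "unit0", "t2": "unit0"}
    utilization = {"unit0": 0.92, "unit1": 0.35}
    print(reassign(threads, utilization, threshold=0.80))   # {'t1': 'unit1', 't2': 'unit1'}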

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the first resource (first thread) management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20110055838A1
CLAIM 1
. A computing system comprising : one or more microprocessors comprising performance monitoring hardware ;
a memory coupled to the one or more microprocessors , wherein the memory stores a program comprising program code ;
and an operating system comprising a scheduler , wherein the scheduler is configured to : assign a plurality of software threads corresponding to the program code to a plurality of computation units ;
receive measured data values from the performance monitoring hardware as the one or more microprocessors process the software threads of the program code ;
and reassign a first thread (first resource) assigned from a first computation unit coupled to a first shared resource to a second computation unit coupled to a second shared resource , in response to determining from the measured data values that a first value corresponding to the utilization of the first shared resource exceeds a predetermined threshold and a second value corresponding to the utilization of the second shared resource does not exceed the predetermined threshold .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (said determination) tracking .
US20110055838A1
CLAIM 12
. The method as recited in claim 9 , further comprising storing configurable predetermined thresholds corresponding to hardware performance metrics used in said determination (memory usage, memory usage tracking) .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing (shared resources) resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (said determination) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource (first thread) management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20110055838A1
CLAIM 1
. A computing system comprising : one or more microprocessors comprising performance monitoring hardware ;
a memory coupled to the one or more microprocessors , wherein the memory stores a program comprising program code ;
and an operating system comprising a scheduler , wherein the scheduler is configured to : assign a plurality of software threads corresponding to the program code to a plurality of computation units ;
receive measured data values from the performance monitoring hardware as the one or more microprocessors process the software threads of the program code ;
and reassign a first thread (first resource) assigned from a first computation unit coupled to a first shared resource to a second computation unit coupled to a second shared resource , in response to determining from the measured data values that a first value corresponding to the utilization of the first shared resource exceeds a predetermined threshold and a second value corresponding to the utilization of the second shared resource does not exceed the predetermined threshold .

US20110055838A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

US20110055838A1
CLAIM 12
. The method as recited in claim 9 , further comprising storing configurable predetermined thresholds corresponding to hardware performance metrics used in said determination (memory usage, memory usage tracking) .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (shared resources) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20110055838A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (shared resources) resource manager to use LIRS based processor usage tracking .
US20110055838A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (shared resources) resource manager to use LIRS based memory usage (said determination) tracking .
US20110055838A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

US20110055838A1
CLAIM 12
. The method as recited in claim 9 , further comprising storing configurable predetermined thresholds corresponding to hardware performance metrics used in said determination (memory usage, memory usage tracking) .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (shared resources) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
US20110055838A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing (shared resources) environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (shared resources) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource (first thread) management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (said determination) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20110055838A1
CLAIM 1
. A computing system comprising : one or more microprocessors comprising performance monitoring hardware ;
a memory coupled to the one or more microprocessors , wherein the memory stores a program comprising program code ;
and an operating system comprising a scheduler , wherein the scheduler is configured to : assign a plurality of software threads corresponding to the program code to a plurality of computation units ;
receive measured data values from the performance monitoring hardware as the one or more microprocessors process the software threads of the program code ;
and reassign a first thread (first resource) assigned from a first computation unit coupled to a first shared resource to a second computation unit coupled to a second shared resource , in response to determining from the measured data values that a first value corresponding to the utilization of the first shared resource exceeds a predetermined threshold and a second value corresponding to the utilization of the second shared resource does not exceed the predetermined threshold .

US20110055838A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

US20110055838A1
CLAIM 12
. The method as recited in claim 9 , further comprising storing configurable predetermined thresholds corresponding to hardware performance metrics used in said determination (memory usage, memory usage tracking) .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (shared resources) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20110055838A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (shared resources) resource manager to use LIRS based processor usage tracking .
US20110055838A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (shared resources) resource manager to use LIRS based memory usage (said determination) tracking .
US20110055838A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .

US20110055838A1
CLAIM 12
. The method as recited in claim 9 , further comprising storing configurable predetermined thresholds corresponding to hardware performance metrics used in said determination (memory usage, memory usage tracking) .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (shared resources) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
US20110055838A1
CLAIM 7
. The computing system as recited in claim 1 , wherein the shared resources (cloud computing, cloud computing resource manager, I/O access rate) correspond to at least one of the following : a branch prediction unit , a cache , a floating-point unit , or an input/output (I/O) device .
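Claims 14 and 18 of US9635134B2 name LIRS and LRU replacement schemes, respectively, as the basis for prioritizing VMs. A minimal sketch of the two recency signals is given below; real LIRS maintains LIR/HIR sets and a stack, so the inter-reference-recency ordering here is a simplification offered only to illustrate the distinction the dependent claims draw:

    # Hypothetical sketch: LRU ordering vs. an LIRS-style inter-reference recency (IRR) ordering.
    def lru_order(last_access):
        # last_access: {vm: timestamp}. Most recently used first; last entry is the eviction candidate.
        return sorted(last_access, key=lambda vm: last_access[vm], reverse=True)

    def lirs_style_order(access_history):
        # access_history: {vm: [t1, t2, ...]}. Smaller IRR (gap between the last two references) = hotter.
        def irr(vm):
            history = access_history[vm]
            return history[-1] - history[-2] if len(history) >= 2 else float("inf")
        return sorted(access_history, key=irr)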




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CA2720087A1

Filed: 2009-04-09     Issued: 2009-10-15

Content delivery in a network

(Original Assignee) Level 3 Communications LLC     (Current Assignee) Level 3 Communications LLC

Jin-Gen Wang, Qing Li, Ron Munoz
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource (first request) management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (one range) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CA2720087A1
CLAIM 11
. The method of claim 1 , wherein retrieving the content comprises generating at least one range (maximum capacity) request specifying a range of data to be retrieved , wherein the range of data is selected based on a measurement of response speed from the media access server .

CA2720087A1
CLAIM 20
. The computer program product of claim 19 , wherein retrieving the content further comprises : first requesting (first resource) at least a first portion of the content , the first portion of the content including parameters related to the content ;
notifying the media streaming server that the first portion is ready for streaming from the local cache ;
sequentially requesting remaining portions of the content ;
and if a location-specific request is received , interrupting sequential requesting of remaining portions of the content to retrieve data at a particular location specified in the location-specific request .
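Claim 11 of CA2720087A1 sizes each range request from a measurement of response speed, and claim 20 adds interruptible sequential retrieval. A minimal sketch of the throughput-adaptive range sizing, assuming a hypothetical fetch_range(start, end) callable and a target duration per request:

    # Hypothetical sketch: adapt range-request size to measured response speed.
    import time

    def retrieve_by_ranges(total_size, fetch_range, target_seconds=1.0, initial_chunk=256 * 1024):
        offset, chunk = 0, initial_chunk
        while offset < total_size:
            end = min(offset + chunk, total_size) - 1
            started = time.monotonic()
            fetch_range(offset, end)                      # e.g., a "Range: bytes=offset-end" style request
            elapsed = max(time.monotonic() - started, 1e-6)
            throughput = (end - offset + 1) / elapsed     # measured response speed
            chunk = max(64 * 1024, int(throughput * target_seconds))
            offset = end + 1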

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the first resource (first request) management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
CA2720087A1
CLAIM 20
. The computer program product of claim 19 , wherein retrieving the content further comprises : first requesting (first resource) at least a first portion of the content , the first portion of the content including parameters related to the content ;
notifying the media streaming server that the first portion is ready for streaming from the local cache ;
sequentially requesting remaining portions of the content ;
and if a location-specific request is received , interrupting sequential requesting of remaining portions of the content to retrieve data at a particular location specified in the location-specific request .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource (first request) management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (one range) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CA2720087A1
CLAIM 11
. The method of claim 1 , wherein retrieving the content comprises generating at least one range (maximum capacity) request specifying a range of data to be retrieved , wherein the range of data is selected based on a measurement of response speed from the media access server .

CA2720087A1
CLAIM 20
. The computer program product of claim 19 , wherein retrieving the content further comprises : first requesting (first resource) at least a first portion of the content , the first portion of the content including parameters related to the content ;
notifying the media streaming server that the first portion is ready for streaming from the local cache ;
sequentially requesting remaining portions of the content ;
and if a location-specific request is received , interrupting sequential requesting of remaining portions of the content to retrieve data at a particular location specified in the location-specific request .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource (first request) management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (one range) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CA2720087A1
CLAIM 11
. The method of claim 1 , wherein retrieving the content comprises generating at least one range (maximum capacity) request specifying a range of data to be retrieved , wherein the range of data is selected based on a measurement of response speed from the media access server .

CA2720087A1
CLAIM 20
. The computer program product of claim 19 , wherein retrieving the content further comprises : first requesting (first resource) at least a first portion of the content , the first portion of the content including parameters related to the content ;
notifying the media streaming server that the first portion is ready for streaming from the local cache ;
sequentially requesting remaining portions of the content ;
and if a location-specific request is received , interrupting sequential requesting of remaining portions of the content to retrieve data at a particular location specified in the location-specific request .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN101541048A

Filed: 2009-04-03     Issued: 2009-09-23

Quality of service control method and network device (服务质量控制方法和网络设备)

(Original Assignee) 华为技术有限公司 (Huawei Technologies Co., Ltd.)

司源
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate (比特速率) of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (单元判) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101541048A
CLAIM 4
. The method according to claim 3 , characterized in that providing a rate guarantee for the user according to the identified service type of the user comprises : setting a guaranteed bit rate (consumption rate, I/O access rate) GBR for the service of that type , or adjusting the GBR for the service of that type .

CN101541048A
CLAIM 15
. The network device according to claim 14 , characterized in that , when the service type of one or more of the user's multiple services cannot be identified , the control unit comprises : a traffic-share determining sub-unit configured to determine the traffic share of the unidentifiable service among the multiple services ; when the traffic-share determining sub-unit determines (memory usage, memory usage tracking) that the traffic share of the unidentifiable service among the multiple services is greater than or equal to a first threshold , the service type of the user is deemed unknown and the control unit does not control the quality of service of the user according to service type ; and when the traffic-share determining sub-unit determines that the traffic share of the unidentifiable service among the multiple services is less than or equal to a second threshold , the control unit controls the quality of service of the user according to the identified service types .
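As translated, claim 15 gates per-service-type QoS control on the traffic share of unidentifiable services relative to two thresholds, while claim 4 sets or adjusts a guaranteed bit rate (GBR) per identified service type. A minimal sketch of the claim-15 decision, with hypothetical inputs:

    # Hypothetical sketch of the two-threshold decision in CN101541048A claim 15.
    def qos_decision(unidentified_bytes, total_bytes, first_threshold, second_threshold):
        share = unidentified_bytes / total_bytes if total_bytes else 0.0
        if share >= first_threshold:
            return "service type unknown: do not control QoS by service type"
        if share <= second_threshold:
            return "control QoS by the identified service types (e.g., set or adjust GBR)"
        return "between thresholds: behaviour not specified in the quoted claim"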

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (单元判) tracking .
CN101541048A
CLAIM 15
. The network device according to claim 14 , characterized in that , when the service type of one or more of the user's multiple services cannot be identified , the control unit comprises : a traffic-share determining sub-unit configured to determine the traffic share of the unidentifiable service among the multiple services ; when the traffic-share determining sub-unit determines (memory usage, memory usage tracking) that the traffic share of the unidentifiable service among the multiple services is greater than or equal to a first threshold , the service type of the user is deemed unknown and the control unit does not control the quality of service of the user according to service type ; and when the traffic-share determining sub-unit determines that the traffic share of the unidentifiable service among the multiple services is less than or equal to a second threshold , the control unit controls the quality of service of the user according to the identified service types .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate (比特速率) of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (单元判) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101541048A
CLAIM 4
. The method according to claim 3 , characterized in that providing a rate guarantee for the user according to the identified service type of the user comprises : setting a guaranteed bit rate (consumption rate, I/O access rate) GBR for the service of that type , or adjusting the GBR for the service of that type .

CN101541048A
CLAIM 15
. The network device according to claim 14 , characterized in that , when the service type of one or more of the user's multiple services cannot be identified , the control unit comprises : a traffic-share determining sub-unit configured to determine the traffic share of the unidentifiable service among the multiple services ; when the traffic-share determining sub-unit determines (memory usage, memory usage tracking) that the traffic share of the unidentifiable service among the multiple services is greater than or equal to a first threshold , the service type of the user is deemed unknown and the control unit does not control the quality of service of the user according to service type ; and when the traffic-share determining sub-unit determines that the traffic share of the unidentifiable service among the multiple services is less than or equal to a second threshold , the control unit controls the quality of service of the user according to the identified service types .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (单元判) tracking .
CN101541048A
CLAIM 15
. The network device according to claim 14 , characterized in that , when the service type of one or more of the user's multiple services cannot be identified , the control unit comprises : a traffic-share determining sub-unit configured to determine the traffic share of the unidentifiable service among the multiple services ; when the traffic-share determining sub-unit determines (memory usage, memory usage tracking) that the traffic share of the unidentifiable service among the multiple services is greater than or equal to a first threshold , the service type of the user is deemed unknown and the control unit does not control the quality of service of the user according to service type ; and when the traffic-share determining sub-unit determines that the traffic share of the unidentifiable service among the multiple services is less than or equal to a second threshold , the control unit controls the quality of service of the user according to the identified service types .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate (比特速率) of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (比特速率) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (单元判) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101541048A
CLAIM 4
. The method according to claim 3 , characterized in that providing a rate guarantee for the user according to the identified service type of the user comprises : setting a guaranteed bit rate (consumption rate, I/O access rate) GBR for the service of that type , or adjusting the GBR for the service of that type .

CN101541048A
CLAIM 15
. The network device according to claim 14 , characterized in that , when the service type of one or more of the user's multiple services cannot be identified , the control unit comprises : a traffic-share determining sub-unit configured to determine the traffic share of the unidentifiable service among the multiple services ; when the traffic-share determining sub-unit determines (memory usage, memory usage tracking) that the traffic share of the unidentifiable service among the multiple services is greater than or equal to a first threshold , the service type of the user is deemed unknown and the control unit does not control the quality of service of the user according to service type ; and when the traffic-share determining sub-unit determines that the traffic share of the unidentifiable service among the multiple services is less than or equal to a second threshold , the control unit controls the quality of service of the user according to the identified service types .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (单元判) tracking .
CN101541048A
CLAIM 15
. The network device according to claim 14 , characterized in that , when the service type of one or more of the user's multiple services cannot be identified , the control unit comprises : a traffic-share determining sub-unit configured to determine the traffic share of the unidentifiable service among the multiple services ; when the traffic-share determining sub-unit determines (memory usage, memory usage tracking) that the traffic share of the unidentifiable service among the multiple services is greater than or equal to a first threshold , the service type of the user is deemed unknown and the control unit does not control the quality of service of the user according to service type ; and when the traffic-share determining sub-unit determines that the traffic share of the unidentifiable service among the multiple services is less than or equal to a second threshold , the control unit controls the quality of service of the user according to the identified service types .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN101978677A

Filed: 2009-03-17     Issued: 2011-02-16

Enhancement of in-band application awareness propagation (带内应用认知传播的增强)

(Original Assignee) 阿尔卡特朗讯公司 (Alcatel-Lucent)

A·多尔加诺, S·E·莫林, C·L·卡恩
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (少一个数据) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101978677A
CLAIM 1
A method of processing a packet sent from a source node to a destination node , the method comprising : receiving the packet sent from the source node to the destination node ; associating the packet with an active flow by accessing information in the packet ; performing deep packet inspection (DPI) to identify an application associated with the active flow ; associating application identification information with the packet ; forwarding the packet containing the application identification information toward the destination node ; and performing application-specific processing on at least one packet (memory usage) belonging to the active flow at a downstream device , the downstream device identifying the application associated with the active flow by extracting the application identification information from the packet .
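The translated claim describes DPI-based classification at one node, in-band tagging of packets with an application identifier, and tag-driven processing downstream. A minimal sketch of that flow; classify_with_dpi and forward are hypothetical stand-ins, and the packet is modelled as a plain dict:

    # Hypothetical sketch of the in-band application-tagging flow in CN101978677A claim 1.
    def tag_and_forward(packet, flows, classify_with_dpi, forward):
        flow_id = (packet["src"], packet["dst"], packet["proto"])   # associate the packet with a flow
        app = flows.get(flow_id) or classify_with_dpi(packet)       # DPI identifies the application
        flows[flow_id] = app
        packet["app_id"] = app                                      # in-band application identification
        forward(packet)                                             # toward the destination node

    def downstream_process(packet, per_app_handlers):
        # The downstream device recovers the application from the tag, without repeating DPI.
        handler = per_app_handlers.get(packet.get("app_id"))
        if handler:
            handler(packet)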

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (少一个数据) tracking .
CN101978677A
CLAIM 1
A method of processing a packet sent from a source node to a destination node , the method comprising : receiving the packet sent from the source node to the destination node ; associating the packet with an active flow by accessing information in the packet ; performing deep packet inspection (DPI) to identify an application associated with the active flow ; associating application identification information with the packet ; forwarding the packet containing the application identification information toward the destination node ; and performing application-specific processing on at least one packet (memory usage) belonging to the active flow at a downstream device , the downstream device identifying the application associated with the active flow by extracting the application identification information from the packet .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (在网络) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN101978677A
CLAIM 7
. An apparatus for processing traffic in a network (community cloud) , the apparatus comprising : a communication module configured to receive and forward packets sent from a source node to a destination node ; and a processor configured to : identify an active flow associated with a packet by accessing information stored in the packet ; perform deep packet inspection (DPI) to identify an application associated with the active flow ; and associate application identification information with the packet , wherein a downstream device extracts the application identification information from the packet to identify the application associated with the active flow .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions (指令进行) that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (少一个数据) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101978677A
CLAIM 1
A method of processing a packet sent from a source node to a destination node , the method comprising : receiving the packet sent from the source node to the destination node ; associating the packet with an active flow by accessing information in the packet ; performing deep packet inspection (DPI) to identify an application associated with the active flow ; associating application identification information with the packet ; forwarding the packet containing the application identification information toward the destination node ; and performing application-specific processing on at least one packet (memory usage) belonging to the active flow at a downstream device , the downstream device identifying the application associated with the active flow by extracting the application identification information from the packet .

CN101978677A
CLAIM 10
. A computer-readable medium encoded with instructions (therein instructions) for processing a packet sent from a source node to a destination node , the computer-readable medium comprising : instructions for receiving the packet sent from the source node to the destination node ; instructions for associating the packet with an active flow by accessing information in the packet ; instructions for performing deep packet inspection (DPI) to identify an application associated with the active flow ; instructions for associating application identification information with the packet ; instructions for forwarding the packet containing the application identification information toward the destination node ; and instructions for performing application-specific processing on at least one packet belonging to the active flow at a downstream device , the downstream device identifying the application associated with the active flow by extracting the application identification information from the packet .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (少一个数据) tracking .
CN101978677A
CLAIM 1
A method of processing a packet sent from a source node to a destination node , the method comprising : receiving the packet sent from the source node to the destination node ; associating the packet with an active flow by accessing information in the packet ; performing deep packet inspection (DPI) to identify an application associated with the active flow ; associating application identification information with the packet ; forwarding the packet containing the application identification information toward the destination node ; and performing application-specific processing on at least one packet (memory usage) belonging to the active flow at a downstream device , the downstream device identifying the application associated with the active flow by extracting the application identification information from the packet .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (在网络) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN101978677A
CLAIM 7
. An apparatus for processing traffic in a network (community cloud) , the apparatus comprising : a communication module configured to receive and forward packets sent from a source node to a destination node ; and a processor configured to : identify an active flow associated with a packet by accessing information stored in the packet ; perform deep packet inspection (DPI) to identify an application associated with the active flow ; and associate application identification information with the packet , wherein a downstream device extracts the application identification information from the packet to identify the application associated with the active flow .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions (指令进行) that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (少一个数据) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101978677A
CLAIM 1
A method of processing a packet sent from a source node to a destination node , the method comprising : receiving the packet sent from the source node to the destination node ; associating the packet with an active flow by accessing information in the packet ; performing deep packet inspection (DPI) to identify an application associated with the active flow ; associating application identification information with the packet ; forwarding the packet containing the application identification information toward the destination node ; and performing application-specific processing on at least one packet (memory usage) belonging to the active flow at a downstream device , the downstream device identifying the application associated with the active flow by extracting the application identification information from the packet .

CN101978677A
CLAIM 10
. A computer-readable medium encoded with instructions (therein instructions) for processing a packet sent from a source node to a destination node , the computer-readable medium comprising : instructions for receiving the packet sent from the source node to the destination node ; instructions for associating the packet with an active flow by accessing information in the packet ; instructions for performing deep packet inspection (DPI) to identify an application associated with the active flow ; instructions for associating application identification information with the packet ; instructions for forwarding the packet containing the application identification information toward the destination node ; and instructions for performing application-specific processing on at least one packet belonging to the active flow at a downstream device , the downstream device identifying the application associated with the active flow by extracting the application identification information from the packet .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (少一个数据) tracking .
CN101978677A
CLAIM 1
A method of processing a packet sent from a source node to a destination node , the method comprising : receiving the packet sent from the source node to the destination node ; associating the packet with an active flow by accessing information in the packet ; performing deep packet inspection (DPI) to identify an application associated with the active flow ; associating application identification information with the packet ; forwarding the packet containing the application identification information toward the destination node ; and performing application-specific processing on at least one packet (memory usage) belonging to the active flow at a downstream device , the downstream device identifying the application associated with the active flow by extracting the application identification information from the packet .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (在网络) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN101978677A
CLAIM 7
. An apparatus for processing traffic in a network (community cloud) , the apparatus comprising : a communication module configured to receive and forward packets sent from a source node to a destination node ; and a processor configured to : identify an active flow associated with a packet by accessing information stored in the packet ; perform deep packet inspection (DPI) to identify an application associated with the active flow ; and associate application identification information with the packet , wherein a downstream device extracts the application identification information from the packet to identify the application associated with the active flow .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
WO2010089626A1

Filed: 2009-02-04     Issued: 2010-08-12

Hybrid program balancing

(Original Assignee) Telefonaktiebolaget L M Ericsson (Publ)     

Ake Arvidsson, Per Anders Holmberg
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment (virtual machines) , comprising : determining a consumption rate of cloud resources (virtual machines) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (load data) , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2010089626A1
CLAIM 1
. A method of balancing loads in a system having a plurality of processing elements , the method comprising : executing a plurality of load balancing algorithms in a dry run on load data (processor usage) from the system ;
recording results of each of the load balancing algorithms ;
evaluating the results of each of the load balancing algorithms ;
selecting the load balancing algorithm providing the best results ;
and implementing the results of the selected algorithm on the system .

WO2010089626A1
CLAIM 20
. The method of claim 1 , wherein the loads that are balanced are caused by virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) .
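WO2010089626A1 claim 1 runs several load-balancing algorithms in a dry run, evaluates the recorded results, and implements the best one. A minimal sketch of that selection loop, with the algorithms, scoring metric, and apply step left as hypothetical callables:

    # Hypothetical sketch of dry-run selection among load-balancing algorithms (claim 1).
    def balance(load_data, algorithms, score, apply_result):
        results = {name: algo(load_data) for name, algo in algorithms.items()}  # dry run
        best = max(results, key=lambda name: score(results[name]))              # evaluate results
        apply_result(results[best])                                             # implement the best
        return best, results[best]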

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
WO2010089626A1
CLAIM 20
. The method of claim 1 , wherein the loads that are balanced are caused by virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the LIRS replacement scheme comprises using LIRS based processor usage (load data) tracking .
WO2010089626A1
CLAIM 1
. A method of balancing loads in a system having a plurality of processing elements , the method comprising : executing a plurality of load balancing algorithms in a dry run on load data (processor usage) from the system ;
recording results of each of the load balancing algorithms ;
evaluating the results of each of the load balancing algorithms ;
selecting the load balancing algorithm providing the best results ;
and implementing the results of the selected algorithm on the system .

WO2010089626A1
CLAIM 20
. The method of claim 1 , wherein the loads that are balanced are caused by virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the LIRS replacement scheme comprises using LIRS based memory usage tracking .
WO2010089626A1
CLAIM 20
. The method of claim 1 , wherein the loads that are balanced are caused by virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
WO2010089626A1
CLAIM 20
. The method of claim 1 , wherein the loads that are balanced are caused by virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
WO2010089626A1
CLAIM 20
. The method of claim 1 , wherein the loads that are balanced are caused by virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources (virtual machines) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (load data) , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment (virtual machines) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2010089626A1
CLAIM 1
. A method of balancing loads in a system having a plurality of processing elements , the method comprising : executing a plurality of load balancing algorithms in a dry run on load data (processor usage) from the system ;
recording results of each of the load balancing algorithms ;
evaluating the results of each of the load balancing algorithms ;
selecting the load balancing algorithm providing the best results ;
and implementing the results of the selected algorithm on the system .

WO2010089626A1
CLAIM 20
. The method of claim 1 , wherein the loads that are balanced are caused by virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using a low inter-reference recency set (LIRS) replacement scheme .
WO2010089626A1
CLAIM 20
. The method of claim 1 , wherein the loads that are balanced are caused by virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (load data) tracking .
WO2010089626A1
CLAIM 1
. A method of balancing loads in a system having a plurality of processing elements , the method comprising : executing a plurality of load balancing algorithms in a dry run on load data (processor usage) from the system ;
recording results of each of the load balancing algorithms ;
evaluating the results of each of the load balancing algorithms ;
selecting the load balancing algorithm providing the best results ;
and implementing the results of the selected algorithm on the system .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
WO2010089626A1
CLAIM 20
. The method of claim 1 , wherein the loads that are balanced are caused by virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using least recently used (LRU) replacement scheme .
WO2010089626A1
CLAIM 20
. The method of claim 1 , wherein the loads that are balanced are caused by virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment (virtual machines) , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (virtual machines) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (processing elements) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (load data) , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2010089626A1
CLAIM 1
. A method of balancing loads in a system having a plurality of processing elements (I/O access rate) , the method comprising : executing a plurality of load balancing algorithms in a dry run on load data (processor usage) from the system ;
recording results of each of the load balancing algorithms ;
evaluating the results of each of the load balancing algorithms ;
selecting the load balancing algorithm providing the best results ;
and implementing the results of the selected algorithm on the system .

WO2010089626A1
CLAIM 20
. The method of claim 1 , wherein the loads that are balanced are caused by virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using a low inter-reference recency set (LIRS) replacement scheme .
WO2010089626A1
CLAIM 20
. The method of claim 1 , wherein the loads that are balanced are caused by virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (load data) tracking .
WO2010089626A1
CLAIM 1
. A method of balancing loads in a system having a plurality of processing elements , the method comprising : executing a plurality of load balancing algorithms in a dry run on load data (processor usage) from the system ;
recording results of each of the load balancing algorithms ;
evaluating the results of each of the load balancing algorithms ;
selecting the load balancing algorithm providing the best results ;
and implementing the results of the selected algorithm on the system .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
WO2010089626A1
CLAIM 20
. The method of claim 1 , wherein the loads that are balanced are caused by virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using least recently used (LRU) replacement scheme .
WO2010089626A1
CLAIM 20
. The method of claim 1 , wherein the loads that are balanced are caused by virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN101730150A

Filed: 2009-01-19     Issued: 2010-06-09

Method for controlling network resources during service flow migration (业务流迁移时对网络资源进行控制的方法)

(Original Assignee) 中兴通讯股份有限公司 (ZTE Corporation)

周娜, 毕以峰, 宗在峰
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme (资源释放) based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme (资源释放) based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101730150A
CLAIM 2
. The method according to claim 1 , characterized in that : the gateway of the target network is a packet data network gateway P-GW shared by the source network and the target network , the resource control information is a resource release (first resource management scheme, second resource management scheme) indication , and the P-GW determines , according to the resource release indication , whether to delete the network resources of the source network and performs the release operation .
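As translated, claim 2 has a P-GW shared by the source and target networks act on a resource-release indication when a service flow migrates, deciding itself whether to delete the source network's resources. A minimal sketch, with should_delete and release as hypothetical callables:

    # Hypothetical sketch of the resource-release handling in CN101730150A claim 2.
    def handle_resource_control(indication, source_resources, should_delete, release):
        if indication == "resource_release":
            for resource in list(source_resources):
                if should_delete(resource):        # the P-GW's own deletion decision
                    release(resource)
                    source_resources.remove(resource)
        return source_resources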

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the first resource management scheme (资源释放) comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
CN101730150A
CLAIM 2
. The method according to claim 1 , characterized in that : the gateway of the target network is a packet data network gateway P-GW shared by the source network and the target network , the resource control information is a resource release (first resource management scheme, second resource management scheme) indication , and the P-GW determines , according to the resource release indication , whether to delete the network resources of the source network and performs the release operation .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (在网络) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN101730150A
CLAIM 1
A method for controlling network resources during service flow migration , characterized in that : during service flow migration , a user equipment UE transmits resource control information to a gateway of a target network , and the gateway of the source network or of the target network performs control operations on the network resources of the network (community cloud) in which it resides according to the resource control information or a corresponding quality of service QoS policy .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the second resource management scheme (资源释放) comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
CN101730150A
CLAIM 2
. The method according to claim 1 , characterized in that : the gateway of the target network is a packet data network gateway P-GW shared by the source network and the target network , the resource control information is a resource release (first resource management scheme, second resource management scheme) indication , and the P-GW determines , according to the resource release indication , whether to delete the network resources of the source network and performs the release operation .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme (资源释放) based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme (资源释放) based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101730150A
CLAIM 2
. The method according to claim 1 , characterized in that : the gateway of the target network is a packet data network gateway P-GW shared by the source network and the target network , the resource control information is a resource release (first resource management scheme, second resource management scheme) indication , and the P-GW determines , according to the resource release indication , whether to delete the network resources of the source network and performs the release operation .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (在网络) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN101730150A
CLAIM 1
A method for controlling network resources during service flow migration , characterized in that : during service flow migration , a user equipment UE transmits resource control information to a gateway of a target network , and the gateway of the source network or of the target network performs control operations on the network resources of the network (community cloud) in which it resides according to the resource control information or a corresponding quality of service QoS policy .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme (资源释放) based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme (资源释放) based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101730150A
CLAIM 2
. The method according to claim 1 , characterized in that : the gateway of the target network is a packet data network gateway P-GW shared by the source network and the target network , the resource control information is a resource release (first resource management scheme, second resource management scheme) indication , and the P-GW determines , according to the resource release indication , whether to delete the network resources of the source network and performs the release operation .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (在网络) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN101730150A
CLAIM 1
A method for controlling network resources during service flow migration , characterized in that : during service flow migration , a user equipment UE transmits resource control information to a gateway of a target network , and the gateway of the source network or of the target network performs control operations on the network resources of the network (community cloud) in which it resides according to the resource control information or a corresponding quality of service QoS policy .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20090182868A1

Filed: 2008-12-23     Issued: 2009-07-16

Automated network infrastructure test and diagnostic system and method therefor

(Original Assignee) Circadence Corp     (Current Assignee) Sons Of Innovation LLC

Marlin Popeye McFate, Mark Vange
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources (network performance) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (network performance) , memory usage (network performance) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (bandwidth usage) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US20090182868A1
CLAIM 17
. The method of claim 14 further comprising interpreting said test results to determine bandwidth usage (maximum capacity) .

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (network performance) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (network performance) using the LIRS replacement scheme comprises using LIRS based processor usage (network performance) tracking .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (network performance) using the LIRS replacement scheme comprises using LIRS based memory usage (network performance) tracking .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (network performance) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (network performance) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources (network performance) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (network performance) , memory usage (network performance) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (bandwidth usage) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US20090182868A1
CLAIM 17
. The method of claim 14 further comprising interpreting said test results to determine bandwidth usage (maximum capacity) .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (network performance) using a low inter-reference recency set (LIRS) replacement scheme .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (network performance) tracking .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (network performance) tracking .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (network performance) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (network performance) using least recently used (LRU) replacement scheme .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (network performance) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (network performance) , memory usage (network performance) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (bandwidth usage) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US20090182868A1
CLAIM 17
. The method of claim 14 further comprising interpreting said test results to determine bandwidth usage (maximum capacity) .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (network performance) using a low inter-reference recency set (LIRS) replacement scheme .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (network performance) tracking .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (network performance) tracking .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (network performance) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (network performance) using least recently used (LRU) replacement scheme .
US20090182868A1
CLAIM 16
. The method of claim 14 further comprising interpreting said test results to determine network performance (cloud resources, processor usage, memory usage, memory usage tracking) .
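
Several of the dependent claims charted above (claims 2 through 4, 6, 8 through 12, 14 through 16 and 18) recite LRU and LIRS replacement schemes for prioritizing VMs. As a rough sketch only, and not the patent's or US20090182868A1's implementation, the snippet below keeps an LRU ordering of VMs by most recent resource access; the cold end of that order would be deprioritized first. LIRS additionally weighs inter-reference recency, which this sketch does not model; the LruVmTracker name is hypothetical.

```python
from collections import OrderedDict

class LruVmTracker:
    """Minimal LRU ordering of VMs by most recent resource access (illustrative)."""

    def __init__(self):
        self._order = OrderedDict()   # vm_id -> last observed usage

    def record_access(self, vm_id, usage):
        # Move the VM to the most-recently-used end on every observation.
        if vm_id in self._order:
            self._order.move_to_end(vm_id)
        self._order[vm_id] = usage

    def eviction_order(self):
        # Least recently used first: these VMs would be deprioritized for
        # further consumption of cloud resources under an LRU scheme.
        return list(self._order.keys())

if __name__ == "__main__":
    t = LruVmTracker()
    for vm, usage in [("vm-a", 0.3), ("vm-b", 0.5), ("vm-a", 0.4), ("vm-c", 0.2)]:
        t.record_access(vm, usage)
    print(t.eviction_order())   # ['vm-b', 'vm-a', 'vm-c']
```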




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN101404624A

Filed: 2008-09-28     Issued: 2009-04-08

System and method for prioritizing downloads of media items

(Original Assignee) Concert Technology Corporation (音乐会技术公司)     

S·L·彼得森
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate (数据速率) of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101404624A
CLAIM 21
. The device according to claim 16, wherein the request further includes information, the information consisting of the group including: a media item list size, a media item list, a user identifier, an estimated download speed, and a network data rate (网络数据速率) (consumption rate, CPU consumption rate, memory consumption rate, I/O access rate)

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (网络数据) , or resources included in virtual private networks (VPNs) .
CN101404624A
CLAIM 21
. The device according to claim 16, wherein the request further includes information, the information consisting of the group including: a media item list size, a media item list, a user identifier, an estimated download speed, and a network data (网络数据) (Internet resources) rate .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate (数据速率) of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101404624A
CLAIM 21
. The device according to claim 16, wherein the request further includes information, the information consisting of the group including: a media item list size, a media item list, a user identifier, an estimated download speed, and a network data rate (网络数据速率) (consumption rate, CPU consumption rate, memory consumption rate, I/O access rate)

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (网络数据) , or resources included in virtual private networks (VPNs) .
CN101404624A
CLAIM 21
. The device according to claim 16, wherein the request further includes information, the information consisting of the group including: a media item list size, a media item list, a user identifier, an estimated download speed, and a network data (网络数据) (Internet resources) rate .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate (数据速率) of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (数据速率) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101404624A
CLAIM 21
. The device according to claim 16, wherein the request further includes information, the information consisting of the group including: a media item list size, a media item list, a user identifier, an estimated download speed, and a network data rate (网络数据速率) (consumption rate, CPU consumption rate, memory consumption rate, I/O access rate)

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (网络数据) , or resources included in virtual private networks (VPNs) .
CN101404624A
CLAIM 21
. The device according to claim 16, wherein the request further includes information, the information consisting of the group including: a media item list size, a media item list, a user identifier, an estimated download speed, and a network data (网络数据) (Internet resources) rate .
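
The mapping above reads CN101404624A's network data rate (网络数据速率) onto the claimed consumption rate. For orientation only, a consumption rate of the kind recited (CPU, memory, or I/O) can be computed as a counter delta over a sampling interval, with the claimed threshold test applied to the change between successive rates. The function names below are hypothetical and appear in neither document.

```python
def consumption_rate(prev_count, curr_count, interval_s):
    """Rate = counter delta per second (e.g., bytes of I/O, CPU ticks)."""
    if interval_s <= 0:
        raise ValueError("interval must be positive")
    return (curr_count - prev_count) / interval_s

def rate_change_exceeds(prev_rate, curr_rate, threshold):
    """Claimed test: does the change in consumption rate exceed a threshold?"""
    return abs(curr_rate - prev_rate) > threshold

if __name__ == "__main__":
    r1 = consumption_rate(1_000_000, 1_600_000, 10)   # 60 kB/s
    r2 = consumption_rate(1_600_000, 3_400_000, 10)   # 180 kB/s
    print(r1, r2, rate_change_exceeds(r1, r2, threshold=50_000))
```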




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20100162261A1

Filed: 2008-05-05     Issued: 2010-06-24

Method and System for Load Balancing in a Distributed Computer System

(Original Assignee) PES INST OF Tech     (Current Assignee) PES INST OF Tech

Laksmikantha Hosahally Shashidhara
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment (job request) , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20100162261A1
CLAIM 14
. A load balancing method using a busy token , in a distributed computer system , comprising : (i) connecting a plurality of computers in a substantial logical ring architecture based on one or more predetermined criteria ;
(ii) counting the number of idle and overloaded computers periodically ;
(iii) circulating at least one predetermined busy token through the logical ring if the number of overloaded computers exceeds the number of idle computers ;
(iv) configuring at least one overloaded computer to acquire the busy token , frame and thereby circulate a message indicative of an overload status and the required resources for completing a predetermined job to other computers in the logical ring ;
and (v) configuring at least one idle computer to check the message and provide a job request (hybrid cloud, cloud computing environment) to the overloaded computer depending upon the overload status and availability of required resources for completing the job , wherein the overloaded computer transfers the job to the idle computer subsequent to the request .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (job request) , Internet resources (available resource) , or resources included in virtual private networks (VPNs) .
US20100162261A1
CLAIM 4
. A load balancing method according to claim 1 further comprising transferring the job to the idle computer based on one or more among job priority , job size , available resource (Internet resources) s , job arrival time and job processing time .

US20100162261A1
CLAIM 14
. A load balancing method using a busy token , in a distributed computer system , comprising : (i) connecting a plurality of computers in a substantial logical ring architecture based on one or more predetermined criteria ;
(ii) counting the number of idle and overloaded computers periodically ;
(iii) circulating at least one predetermined busy token through the logical ring if the number of overloaded computers exceeds the number of idle computers ;
(iv) configuring at least one overloaded computer to acquire the busy token , frame and thereby circulate a message indicative of an overload status and the required resources for completing a predetermined job to other computers in the logical ring ;
and (v) configuring at least one idle computer to check the message and provide a job request (hybrid cloud, cloud computing environment) to the overloaded computer depending upon the overload status and availability of required resources for completing the job , wherein the overloaded computer transfers the job to the idle computer subsequent to the request .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment (job request) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20100162261A1
CLAIM 14
. A load balancing method using a busy token , in a distributed computer system , comprising : (i) connecting a plurality of computers in a substantial logical ring architecture based on one or more predetermined criteria ;
(ii) counting the number of idle and overloaded computers periodically ;
(iii) circulating at least one predetermined busy token through the logical ring if the number of overloaded computers exceeds the number of idle computers ;
(iv) configuring at least one overloaded computer to acquire the busy token , frame and thereby circulate a message indicative of an overload status and the required resources for completing a predetermined job to other computers in the logical ring ;
and (v) configuring at least one idle computer to check the message and provide a job request (hybrid cloud, cloud computing environment) to the overloaded computer depending upon the overload status and availability of required resources for completing the job , wherein the overloaded computer transfers the job to the idle computer subsequent to the request .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (job request) , Internet resources (available resource) , or resources included in virtual private networks (VPNs) .
US20100162261A1
CLAIM 4
. A load balancing method according to claim 1 further comprising transferring the job to the idle computer based on one or more among job priority , job size , available resource (Internet resources) s , job arrival time and job processing time .

US20100162261A1
CLAIM 14
. A load balancing method using a busy token , in a distributed computer system , comprising : (i) connecting a plurality of computers in a substantial logical ring architecture based on one or more predetermined criteria ;
(ii) counting the number of idle and overloaded computers periodically ;
(iii) circulating at least one predetermined busy token through the logical ring if the number of overloaded computers exceeds the number of idle computers ;
(iv) configuring at least one overloaded computer to acquire the busy token , frame and thereby circulate a message indicative of an overload status and the required resources for completing a predetermined job to other computers in the logical ring ;
and (v) configuring at least one idle computer to check the message and provide a job request (hybrid cloud, cloud computing environment) to the overloaded computer depending upon the overload status and availability of required resources for completing the job , wherein the overloaded computer transfers the job to the idle computer subsequent to the request .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment (job request) , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20100162261A1
CLAIM 14
. A load balancing method using a busy token , in a distributed computer system , comprising : (i) connecting a plurality of computers in a substantial logical ring architecture based on one or more predetermined criteria ;
(ii) counting the number of idle and overloaded computers periodically ;
(iii) circulating at least one predetermined busy token through the logical ring if the number of overloaded computers exceeds the number of idle computers ;
(iv) configuring at least one overloaded computer to acquire the busy token , frame and thereby circulate a message indicative of an overload status and the required resources for completing a predetermined job to other computers in the logical ring ;
and (v) configuring at least one idle computer to check the message and provide a job request (hybrid cloud, cloud computing environment) to the overloaded computer depending upon the overload status and availability of required resources for completing the job , wherein the overloaded computer transfers the job to the idle computer subsequent to the request .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (job request) , Internet resources (available resource) , or resources included in virtual private networks (VPNs) .
US20100162261A1
CLAIM 4
. A load balancing method according to claim 1 further comprising transferring the job to the idle computer based on one or more among job priority , job size , available resource (Internet resources) s , job arrival time and job processing time .

US20100162261A1
CLAIM 14
. A load balancing method using a busy token , in a distributed computer system , comprising : (i) connecting a plurality of computers in a substantial logical ring architecture based on one or more predetermined criteria ;
(ii) counting the number of idle and overloaded computers periodically ;
(iii) circulating at least one predetermined busy token through the logical ring if the number of overloaded computers exceeds the number of idle computers ;
(iv) configuring at least one overloaded computer to acquire the busy token , frame and thereby circulate a message indicative of an overload status and the required resources for completing a predetermined job to other computers in the logical ring ;
and (v) configuring at least one idle computer to check the message and provide a job request (hybrid cloud, cloud computing environment) to the overloaded computer depending upon the overload status and availability of required resources for completing the job , wherein the overloaded computer transfers the job to the idle computer subsequent to the request .
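
US20100162261A1 claim 14, quoted above, circulates a busy token through a logical ring when overloaded computers outnumber idle ones, after which an idle computer with the required resources issues a job request. The toy simulation below mirrors one pass of that flow under simplifying assumptions (single token, static loads, a hypothetical Node class); it is an interpretation aid, not the reference's implementation.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    load: float          # 0.0 (idle) .. 1.0 (saturated)
    free_cpu: int        # resources it could lend

    def overloaded(self):
        return self.load > 0.8

    def idle(self):
        return self.load < 0.2

def circulate_busy_token(ring, required_cpu):
    """One pass of the busy token around the logical ring (illustrative)."""
    overloaded = [n for n in ring if n.overloaded()]
    idle = [n for n in ring if n.idle()]
    if len(overloaded) <= len(idle):
        return None                      # token is not circulated
    sender = overloaded[0]               # node that acquires the token
    # The framed message: overload status plus required resources for the job.
    message = {"from": sender.name, "required_cpu": required_cpu}
    for node in ring:                    # token visits each node in ring order
        if node.idle() and node.free_cpu >= message["required_cpu"]:
            return (sender.name, node.name)   # idle node sends a job request
    return None

if __name__ == "__main__":
    ring = [Node("n1", 0.9, 0), Node("n2", 0.95, 1), Node("n3", 0.1, 4)]
    print(circulate_busy_token(ring, required_cpu=2))   # ('n1', 'n3')
```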




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
WO2008142705A2

Filed: 2008-05-05     Issued: 2008-11-27

A method and system for load balancing in a distributed computer system

(Original Assignee) Pes Institute Of Technology     

Hosahally Lakshmikantha Shashidhara, Chennagiri Anandachar Latha
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment (job request) , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2008142705A2
CLAIM 14
. A load balancing method using a busy token , in a distributed computer system , comprising : (i) connecting a plurality of computers in a substantial logical ring architecture based on one or more predetermined criteria ;
(ii) counting the number of idle and overloaded computers periodically ;
(iii)circulating at least one predetermined busy token through the logical ring if the number of overloaded computers exceeds the number of idle computers ;
(iv) configuring at least one overloaded computer to acquire the busy token , frame and thereby circulate a message indicative of an overload status and the required resources for completing a predetermined job to other computers in the logical ring ;
and (v) configuring at least one idle computer to check the message and provide a job request (hybrid cloud, cloud computing environment) to the overloaded computer depending upon the overload status and availability of required resources for completing the job , wherein the overloaded computer transfers the job to the idle computer subsequent to the request .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (job request) , Internet resources (available resource) , or resources included in virtual private networks (VPNs) .
WO2008142705A2
CLAIM 4
. A load balancing method according to claim 1 further comprising transferring the job to the idle computer based on one or more among job priority , job size , available resource (Internet resources) s , job arrival time and job processing time .

WO2008142705A2
CLAIM 14
. A load balancing method using a busy token , in a distributed computer system , comprising : (i) connecting a plurality of computers in a substantial logical ring architecture based on one or more predetermined criteria ;
(ii) counting the number of idle and overloaded computers periodically ;
(iii)circulating at least one predetermined busy token through the logical ring if the number of overloaded computers exceeds the number of idle computers ;
(iv) configuring at least one overloaded computer to acquire the busy token , frame and thereby circulate a message indicative of an overload status and the required resources for completing a predetermined job to other computers in the logical ring ;
and (v) configuring at least one idle computer to check the message and provide a job request (hybrid cloud, cloud computing environment) to the overloaded computer depending upon the overload status and availability of required resources for completing the job , wherein the overloaded computer transfers the job to the idle computer subsequent to the request .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment (job request) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2008142705A2
CLAIM 14
. A load balancing method using a busy token , in a distributed computer system , comprising : (i) connecting a plurality of computers in a substantial logical ring architecture based on one or more predetermined criteria ;
(ii) counting the number of idle and overloaded computers periodically ;
(iii)circulating at least one predetermined busy token through the logical ring if the number of overloaded computers exceeds the number of idle computers ;
(iv) configuring at least one overloaded computer to acquire the busy token , frame and thereby circulate a message indicative of an overload status and the required resources for completing a predetermined job to other computers in the logical ring ;
and (v) configuring at least one idle computer to check the message and provide a job request (hybrid cloud, cloud computing environment) to the overloaded computer depending upon the overload status and availability of required resources for completing the job , wherein the overloaded computer transfers the job to the idle computer subsequent to the request .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (job request) , Internet resources (available resource) , or resources included in virtual private networks (VPNs) .
WO2008142705A2
CLAIM 4
. A load balancing method according to claim 1 further comprising transferring the job to the idle computer based on one or more among job priority , job size , available resource (Internet resources) s , job arrival time and job processing time .

WO2008142705A2
CLAIM 14
. A load balancing method using a busy token , in a distributed computer system , comprising : (i) connecting a plurality of computers in a substantial logical ring architecture based on one or more predetermined criteria ;
(ii) counting the number of idle and overloaded computers periodically ;
(iii)circulating at least one predetermined busy token through the logical ring if the number of overloaded computers exceeds the number of idle computers ;
(iv) configuring at least one overloaded computer to acquire the busy token , frame and thereby circulate a message indicative of an overload status and the required resources for completing a predetermined job to other computers in the logical ring ;
and (v) configuring at least one idle computer to check the message and provide a job request (hybrid cloud, cloud computing environment) to the overloaded computer depending upon the overload status and availability of required resources for completing the job , wherein the overloaded computer transfers the job to the idle computer subsequent to the request .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment (job request) , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2008142705A2
CLAIM 14
. A load balancing method using a busy token , in a distributed computer system , comprising : (i) connecting a plurality of computers in a substantial logical ring architecture based on one or more predetermined criteria ;
(ii) counting the number of idle and overloaded computers periodically ;
(iii)circulating at least one predetermined busy token through the logical ring if the number of overloaded computers exceeds the number of idle computers ;
(iv) configuring at least one overloaded computer to acquire the busy token , frame and thereby circulate a message indicative of an overload status and the required resources for completing a predetermined job to other computers in the logical ring ;
and (v) configuring at least one idle computer to check the message and provide a job request (hybrid cloud, cloud computing environment) to the overloaded computer depending upon the overload status and availability of required resources for completing the job , wherein the overloaded computer transfers the job to the idle computer subsequent to the request .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (job request) , Internet resources (available resource) , or resources included in virtual private networks (VPNs) .
WO2008142705A2
CLAIM 4
. A load balancing method according to claim 1 further comprising transferring the job to the idle computer based on one or more among job priority , job size , available resource (Internet resources) s , job arrival time and job processing time .

WO2008142705A2
CLAIM 14
. A load balancing method using a busy token , in a distributed computer system , comprising : (i) connecting a plurality of computers in a substantial logical ring architecture based on one or more predetermined criteria ;
(ii) counting the number of idle and overloaded computers periodically ;
(iii)circulating at least one predetermined busy token through the logical ring if the number of overloaded computers exceeds the number of idle computers ;
(iv) configuring at least one overloaded computer to acquire the busy token , frame and thereby circulate a message indicative of an overload status and the required resources for completing a predetermined job to other computers in the logical ring ;
and (v) configuring at least one idle computer to check the message and provide a job request (hybrid cloud, cloud computing environment) to the overloaded computer depending upon the overload status and availability of required resources for completing the job , wherein the overloaded computer transfers the job to the idle computer subsequent to the request .
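
WO2008142705A2 claim 4, quoted above, has the idle computer take the job based on job priority, job size, available resources, arrival time and processing time. The snippet below sketches one plausible ordering of pending offers under those criteria; the field names and the ordering of tie-breakers are assumptions, not the reference's method.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class JobOffer:
    job_id: str
    priority: int        # higher = more urgent
    size: int            # resource units required
    arrival: float       # arrival time (earlier is served first)

def accept_order(offers: List[JobOffer], available_units: int) -> List[str]:
    """Keep only offers the idle computer can fit, then order them by the
    claim 4 criteria: priority first, then arrival time, then job size."""
    feasible = [o for o in offers if o.size <= available_units]
    feasible.sort(key=lambda o: (-o.priority, o.arrival, o.size))
    return [o.job_id for o in feasible]

if __name__ == "__main__":
    offers = [JobOffer("j1", 2, 4, 100.0),
              JobOffer("j2", 5, 8, 101.0),
              JobOffer("j3", 5, 2, 102.0)]
    print(accept_order(offers, available_units=6))   # ['j3', 'j1']
```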




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN101663647A

Filed: 2008-04-25     Issued: 2010-03-03

Apparatus for deciding whether to launch an application locally or to launch the application remotely as a webapp

(Original Assignee) Qualcomm Incorporated (高通股份有限公司)     

N·纳加拉杰
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (的使用) , memory usage (的使用) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (处理能力) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101663647A
CLAIM 7
. The method according to claim 1, wherein the resource is selected from the group consisting of: battery capacity, memory capacity, processing-capability (处理能力) (maximum capacity) capacity, battery usage, memory usage, and processing-capability usage .

CN101663647A
CLAIM 14
. The mobile communication device according to claim 12, wherein the determining includes determining whether the resource's usage (资源的使用) (processor usage, memory usage, processor usage tracking) threshold would be exceeded if a first instance of the application program were launched and executed on the mobile communication device .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (的使用) tracking .
CN101663647A
CLAIM 14
. The mobile communication device according to claim 12, wherein the determining includes determining whether the resource's usage (资源的使用) (processor usage, memory usage, processor usage tracking) threshold would be exceeded if a first instance of the application program were launched and executed on the mobile communication device .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (的使用) tracking .
CN101663647A
CLAIM 14
. The mobile communication device according to claim 12, wherein the determining includes determining whether the resource's usage (资源的使用) (processor usage, memory usage, processor usage tracking) threshold would be exceeded if a first instance of the application program were launched and executed on the mobile communication device .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (的使用) , memory usage (的使用) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (处理能力) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101663647A
CLAIM 7
. The method according to claim 1, wherein the resource is selected from the group consisting of: battery capacity, memory capacity, processing-capability (处理能力) (maximum capacity) capacity, battery usage, memory usage, and processing-capability usage .

CN101663647A
CLAIM 14
. The mobile communication device according to claim 12, wherein the determining includes determining whether the resource's usage (资源的使用) (processor usage, memory usage, processor usage tracking) threshold would be exceeded if a first instance of the application program were launched and executed on the mobile communication device .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (的使用) tracking .
CN101663647A
CLAIM 14
. The mobile communication device according to claim 12, wherein the determining includes determining whether the resource's usage (资源的使用) (processor usage, memory usage, processor usage tracking) threshold would be exceeded if a first instance of the application program were launched and executed on the mobile communication device .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (的使用) tracking .
CN101663647A
CLAIM 14
. The mobile communication device according to claim 12, wherein the determining includes determining whether the resource's usage (资源的使用) (processor usage, memory usage, processor usage tracking) threshold would be exceeded if a first instance of the application program were launched and executed on the mobile communication device .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (的使用) , memory usage (的使用) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (处理能力) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101663647A
CLAIM 7
. The method of claim 1, wherein the resource is selected from the group consisting of: battery capacity, memory capacity, processing capability (maximum capacity) capacity, battery usage, memory usage, and processing capability usage.

CN101663647A
CLAIM 14
. The mobile communication device of claim 12, wherein the determining includes determining whether a usage (processor usage, memory usage, processor usage tracking) threshold of the resource would be exceeded if a first instance of the application were started and executed on the mobile communication device.

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (的使用) tracking .
CN101663647A
CLAIM 14
. The mobile communication device of claim 12, wherein the determining includes determining whether a usage (processor usage, memory usage, processor usage tracking) threshold of the resource would be exceeded if a first instance of the application were started and executed on the mobile communication device.

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (的使用) tracking .
CN101663647A
CLAIM 14
. The mobile communication device of claim 12, wherein the determining includes determining whether a usage (processor usage, memory usage, processor usage tracking) threshold of the resource would be exceeded if a first instance of the application were started and executed on the mobile communication device.




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20090235265A1

Filed: 2008-03-12     Issued: 2009-09-17

Method and system for cost avoidance in virtualized computing environments

(Original Assignee) International Business Machines Corp     (Current Assignee) ServiceNow Inc

Christopher J. DAWSON, Carl P. Gusler, II Rick A. Hamilton, James W. Seaman
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment (network resource) , comprising : determining a consumption rate of cloud resources (network resource) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (memory resource) , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20090235265A1
CLAIM 2
. The method of claim 1 , wherein the resources comprise at least one of a CPU resource , a memory resource (processor usage) , an input/output (I/O) resource , and a network resource (cloud resources, cloud computing environment, alternate cloud resources) .
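
Claim 1, charted above, reads as a monitor, prioritize, threshold-check, re-prioritize, migrate loop. The Python sketch below is a non-authoritative paraphrase of that loop under stated assumptions: VmSample, MAX_ALLOWED_UTILIZATION, CHANGE_THRESHOLD and the migrate() hook are invented names, the two "schemes" are simple sort orders standing in for the claimed first and second resource management schemes, and no particular replacement policy (LIRS, LRU) is implied here.

    from dataclasses import dataclass

    MAX_ALLOWED_UTILIZATION = 0.80   # assumed stand-in for the "maximum capacity" of allowed cloud resources
    CHANGE_THRESHOLD = 0.25          # assumed stand-in for the "predetermined threshold" on the change in rate

    @dataclass
    class VmSample:
        vm_id: str
        cpu: float            # CPU consumption rate, normalized 0..1
        memory: float         # memory consumption rate, normalized 0..1
        io: float             # I/O access rate, normalized 0..1
        change_region: float  # changed-region size of the VM's graphical display, normalized 0..1

    def consumption_rate(s: VmSample) -> float:
        # Any of the four monitored signals can evidence the claimed consumption rate;
        # this sketch simply takes the largest one.
        return max(s.cpu, s.memory, s.io, s.change_region)

    def first_scheme(samples):
        # First resource management scheme: rank VMs by the determined consumption rate.
        return sorted(samples, key=consumption_rate, reverse=True)

    def second_scheme(samples):
        # Second resource management scheme: rank VMs by how hard they press against the
        # maximum allowed capacity (a placeholder ordering, not the patent's own).
        return sorted(samples, key=lambda s: consumption_rate(s) / MAX_ALLOWED_UTILIZATION, reverse=True)

    def manage(previous, current, migrate):
        """previous and current map vm_id -> VmSample; migrate(vm_id) is a hypothetical
        hook that moves a VM's consumption to alternate cloud resources outside the
        cloud computing environment."""
        ranked = first_scheme(current.values())
        pool_utilization = sum(consumption_rate(s) for s in current.values()) / max(len(current), 1)
        for vm_id, now in current.items():
            before = previous.get(vm_id)
            change = abs(consumption_rate(now) - consumption_rate(before)) if before else 0.0
            if change > CHANGE_THRESHOLD and pool_utilization > MAX_ALLOWED_UTILIZATION:
                ranked = second_scheme(current.values())
                migrate(ranked[0].vm_id)  # migrate the VM prioritized first under the second scheme
                break
        return ranked

A call such as manage(prev, curr, lambda vm: print("migrate", vm)) returns the current priority order and triggers at most one migration per pass.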

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (network resource) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20090235265A1
CLAIM 2
. The method of claim 1 , wherein the resources comprise at least one of a CPU resource , a memory resource , an input/output (I/O) resource , and a network resource (cloud resources, cloud computing environment, alternate cloud resources) .
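
Claim 2 introduces the low inter-reference recency set (LIRS) replacement scheme. The toy Python class below only illustrates the core LIRS signal, inter-reference recency (the number of distinct other VMs referenced between two consecutive references to the same VM), and ranks VMs with low IRR as the set to retain; it omits the LIR/HIR stack pruning of the full algorithm, and LirsPrioritizer is an invented name.

    from collections import defaultdict

    class LirsPrioritizer:
        """Toy LIRS-style ranking: low inter-reference recency (IRR) means high priority to keep."""

        def __init__(self):
            self.history = []                                # ordered stream of VM references
            self.irr = defaultdict(lambda: float("inf"))     # IRR stays infinite until a VM repeats

        def reference(self, vm_id: str) -> None:
            if vm_id in self.history:
                last = len(self.history) - 1 - self.history[::-1].index(vm_id)
                # IRR = number of distinct other VMs touched since this VM was last referenced
                self.irr[vm_id] = len(set(self.history[last + 1:]))
            self.history.append(vm_id)

        def ranked(self):
            seen = dict.fromkeys(self.history)               # first-seen order, duplicates removed
            return sorted(seen, key=lambda v: self.irr[v])   # lowest IRR (the LIR set) first

    p = LirsPrioritizer()
    for vm in ["vm1", "vm2", "vm1", "vm3", "vm1", "vm2"]:
        p.reference(vm)
    print(p.ranked())   # ['vm1', 'vm2', 'vm3'] -- vm1 has the lowest IRR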

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (network resource) using the LIRS replacement scheme comprises using LIRS based processor usage (memory resource) tracking .
US20090235265A1
CLAIM 2
. The method of claim 1 , wherein the resources comprise at least one of a CPU resource , a memory resource (processor usage) , an input/output (I/O) resource , and a network resource (cloud resources, cloud computing environment, alternate cloud resources) .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (network resource) using the LIRS replacement scheme comprises using LIRS based memory usage tracking .
US20090235265A1
CLAIM 2
. The method of claim 1 , wherein the resources comprise at least one of a CPU resource , a memory resource , an input/output (I/O) resource , and a network resource (cloud resources, cloud computing environment, alternate cloud resources) .
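
Claims 3 and 4 (and their medium and system counterparts) tie the LIRS scheme to processor usage tracking and memory usage tracking. One way to read that, sketched under assumptions: treat a usage sample above some activity level as a "reference" to the VM and feed it to the LIRS-style prioritizer from the previous sketch; ACTIVITY_LEVEL and track_usage are invented names, and the cut-off value is arbitrary.

    ACTIVITY_LEVEL = 0.5   # assumed cut-off: a sample above this counts as one reference

    def track_usage(prioritizer, samples, metric="cpu"):
        """samples is an iterable of (vm_id, {"cpu": float, "memory": float}) pairs.
        Converts per-VM processor usage (metric="cpu") or memory usage (metric="memory")
        samples into the reference stream consumed by the LIRS-style prioritizer
        sketched above, then returns its ranking."""
        for vm_id, usage in samples:
            if usage.get(metric, 0.0) >= ACTIVITY_LEVEL:
                prioritizer.reference(vm_id)
        return prioritizer.ranked()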

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (network resource) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US20090235265A1
CLAIM 2
. The method of claim 1 , wherein the resources comprise at least one of a CPU resource , a memory resource , an input/output (I/O) resource , and a network resource (cloud resources, cloud computing environment, alternate cloud resources) .
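
Claim 5 only enumerates where the alternate cloud resources may sit (public, community, private, hybrid cloud, Internet resources, VPNs). If one wanted that enumeration in concrete form, a small lookup such as the following would do; the preference order is purely illustrative and not taken from the patent.

    from enum import Enum, auto

    class AlternateCloud(Enum):
        PUBLIC = auto()
        COMMUNITY = auto()
        PRIVATE = auto()
        HYBRID = auto()
        INTERNET = auto()
        VPN = auto()

    # Hypothetical preference order for picking a migration target outside the environment.
    MIGRATION_PREFERENCE = [AlternateCloud.PRIVATE, AlternateCloud.HYBRID, AlternateCloud.PUBLIC]

    def pick_target(available):
        # 'available' is a set of AlternateCloud members reachable from the environment.
        return next((c for c in MIGRATION_PREFERENCE if c in available), None)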

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (network resource) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
US20090235265A1
CLAIM 2
. The method of claim 1 , wherein the resources comprise at least one of a CPU resource , a memory resource , an input/output (I/O) resource , and a network resource (cloud resources, cloud computing environment, alternate cloud resources) .
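
Claim 6 swaps in a least recently used (LRU) replacement scheme as the second resource management scheme. The ordering itself is the textbook LRU rule; the class name and the victims() helper below are assumptions.

    from collections import OrderedDict

    class LruPrioritizer:
        """Least recently used first: the VM untouched for the longest time is the
        first candidate to give up cloud resources."""

        def __init__(self):
            self._order = OrderedDict()

        def reference(self, vm_id: str) -> None:
            self._order.pop(vm_id, None)
            self._order[vm_id] = True      # most recently used moves to the end

        def victims(self):
            return list(self._order)       # least recently used first

    lru = LruPrioritizer()
    for vm in ["vm1", "vm2", "vm3", "vm1"]:
        lru.reference(vm)
    print(lru.victims())                   # ['vm2', 'vm3', 'vm1']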

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources (network resource) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (memory resource) , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment (network resource) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20090235265A1
CLAIM 2
. The method of claim 1 , wherein the resources comprise at least one of a CPU resource , a memory resource (processor usage) , an input/output (I/O) resource , and a network resource (cloud resources, cloud computing environment, alternate cloud resources) .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (network resource) using a low inter-reference recency set (LIRS) replacement scheme .
US20090235265A1
CLAIM 2
. The method of claim 1 , wherein the resources comprise at least one of a CPU resource , a memory resource , an input/output (I/O) resource , and a network resource (cloud resources, cloud computing environment, alternate cloud resources) .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (memory resource) tracking .
US20090235265A1
CLAIM 2
. The method of claim 1 , wherein the resources comprise at least one of a CPU resource , a memory resource (processor usage) , an input/output (I/O) resource , and a network resource .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (network resource) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US20090235265A1
CLAIM 2
. The method of claim 1 , wherein the resources comprise at least one of a CPU resource , a memory resource , an input/output (I/O) resource , and a network resource (cloud resources, cloud computing environment, alternate cloud resources) .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (network resource) using least recently used (LRU) replacement scheme .
US20090235265A1
CLAIM 2
. The method of claim 1 , wherein the resources comprise at least one of a CPU resource , a memory resource , an input/output (I/O) resource , and a network resource (cloud resources, cloud computing environment, alternate cloud resources) .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment (network resource) , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (network resource) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (memory resource) , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20090235265A1
CLAIM 2
. The method of claim 1 , wherein the resources comprise at least one of a CPU resource , a memory resource (processor usage) , an input/output (I/O) resource , and a network resource (cloud resources, cloud computing environment, alternate cloud resources) .
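
Claim 13's monitored signals include "a change region size determined according to changed regions of a graphical display generated by the one or more VMs". The claim does not say how that size is computed; the sketch below is one plausible reading (a tile-wise diff of two framebuffer snapshots), with the tile size and the 2-D list representation being assumptions.

    def change_region_size(prev_frame, curr_frame, tile=8):
        """Count the area (in pixels, rounded up to whole tiles) of tile x tile blocks
        of a VM's display that differ between two snapshots. Frames are equally sized
        2-D lists of pixel values."""
        rows, cols = len(curr_frame), len(curr_frame[0])
        changed_tiles = 0
        for r in range(0, rows, tile):
            for c in range(0, cols, tile):
                if any(prev_frame[i][j] != curr_frame[i][j]
                       for i in range(r, min(r + tile, rows))
                       for j in range(c, min(c + tile, cols))):
                    changed_tiles += 1
        return changed_tiles * tile * tile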

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (network resource) using a low inter-reference recency set (LIRS) replacement scheme .
US20090235265A1
CLAIM 2
. The method of claim 1 , wherein the resources comprise at least one of a CPU resource , a memory resource , an input/output (I/O) resource , and a network resource (cloud resources, cloud computing environment, alternate cloud resources) .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (memory resource) tracking .
US20090235265A1
CLAIM 2
. The method of claim 1 , wherein the resources comprise at least one of a CPU resource , a memory resource (processor usage) , an input/output (I/O) resource , and a network resource .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (network resource) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US20090235265A1
CLAIM 2
. The method of claim 1 , wherein the resources comprise at least one of a CPU resource , a memory resource , an input/output (I/O) resource , and a network resource (cloud resources, cloud computing environment, alternate cloud resources) .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (network resource) using least recently used (LRU) replacement scheme .
US20090235265A1
CLAIM 2
. The method of claim 1 , wherein the resources comprise at least one of a CPU resource , a memory resource , an input/output (I/O) resource , and a network resource (cloud resources, cloud computing environment, alternate cloud resources) .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN101889264A

Filed: 2007-11-02     Issued: 2010-11-17

Apparatus and method for configurable system event and resource arbitration management

(Original Assignee) 高通股份有限公司     

天宇·里·达莫雷, 乌平德·辛格·巴巴尔, 戴维·C·帕克, 斯里尼瓦桑·巴拉苏布拉马尼安
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing (一种计算) environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101889264A
CLAIM 25
. A computer (cloud computing, cloud computing resource manager) program product, comprising: a computer-readable medium containing instructions for managing the allocation of system resources for a mobile device application, the instructions including: at least one instruction for associating a privilege code, a priority level, or the privilege code and the priority level with a mobile application of a mobile device; and at least one instruction for granting or denying the mobile application access to a system resource, the granting or denying being based at least on the priority level, the privilege code, or the priority level and the privilege code.
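
For contrast with the VM-level prioritization of US9635134B2, the quoted CN101889264A claim 25 gates a mobile application's access to system resources on a priority level and/or privilege code. The sketch below restates that grant-or-deny rule; the numeric scale and the code strings are illustrative assumptions only.

    def grant_access(priority_level: int, privilege_code: str,
                     required_priority: int = 5,
                     allowed_codes=("SYS", "OEM")) -> bool:
        # CN101889264A claim 25 flavor: grant or deny the mobile application access to
        # a system resource based at least on its priority level and/or privilege code.
        return priority_level >= required_priority or privilege_code in allowed_codes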

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage tracking (计算机程序产品) .
CN101889264A
CLAIM 25
. A computer program product (processor usage tracking), comprising: a computer-readable medium containing instructions for managing the allocation of system resources for a mobile device application, the instructions including: at least one instruction for associating a privilege code, a priority level, or the privilege code and the priority level with a mobile application of a mobile device; and at least one instruction for granting or denying the mobile application access to a system resource, the granting or denying being based at least on the priority level, the privilege code, or the priority level and the privilege code.

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing (一种计算) resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101889264A
CLAIM 25
. A computer (cloud computing, cloud computing resource manager) program product, comprising: a computer-readable medium containing instructions for managing the allocation of system resources for a mobile device application, the instructions including: at least one instruction for associating a privilege code, a priority level, or the privilege code and the priority level with a mobile application of a mobile device; and at least one instruction for granting or denying the mobile application access to a system resource, the granting or denying being based at least on the priority level, the privilege code, or the priority level and the privilege code.

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
CN101889264A
CLAIM 25
. A computer (cloud computing, cloud computing resource manager) program product, comprising: a computer-readable medium containing instructions for managing the allocation of system resources for a mobile device application, the instructions including: at least one instruction for associating a privilege code, a priority level, or the privilege code and the priority level with a mobile application of a mobile device; and at least one instruction for granting or denying the mobile application access to a system resource, the granting or denying being based at least on the priority level, the privilege code, or the priority level and the privilege code.

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based processor usage tracking (计算机程序产品) .
CN101889264A
CLAIM 25
. A computer program product (processor usage tracking), comprising: a computer-readable medium containing instructions for managing the allocation of system resources for a mobile device application, the instructions including: at least one instruction for associating a privilege code, a priority level, or the privilege code and the priority level with a mobile application of a mobile device; and at least one instruction for granting or denying the mobile application access to a system resource, the granting or denying being based at least on the priority level, the privilege code, or the priority level and the privilege code.

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based memory usage tracking .
CN101889264A
CLAIM 25
. A computer (cloud computing, cloud computing resource manager) program product, comprising: a computer-readable medium containing instructions for managing the allocation of system resources for a mobile device application, the instructions including: at least one instruction for associating a privilege code, a priority level, or the privilege code and the priority level with a mobile application of a mobile device; and at least one instruction for granting or denying the mobile application access to a system resource, the granting or denying being based at least on the priority level, the privilege code, or the priority level and the privilege code.

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
CN101889264A
CLAIM 25
. A computer (cloud computing, cloud computing resource manager) program product, comprising: a computer-readable medium containing instructions for managing the allocation of system resources for a mobile device application, the instructions including: at least one instruction for associating a privilege code, a priority level, or the privilege code and the priority level with a mobile application of a mobile device; and at least one instruction for granting or denying the mobile application access to a system resource, the granting or denying being based at least on the priority level, the privilege code, or the priority level and the privilege code.

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing (一种计算) environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101889264A
CLAIM 25
. A computer (cloud computing, cloud computing resource manager) program product, comprising: a computer-readable medium containing instructions for managing the allocation of system resources for a mobile device application, the instructions including: at least one instruction for associating a privilege code, a priority level, or the privilege code and the priority level with a mobile application of a mobile device; and at least one instruction for granting or denying the mobile application access to a system resource, the granting or denying being based at least on the priority level, the privilege code, or the priority level and the privilege code.

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
CN101889264A
CLAIM 25
. A computer (cloud computing, cloud computing resource manager) program product, comprising: a computer-readable medium containing instructions for managing the allocation of system resources for a mobile device application, the instructions including: at least one instruction for associating a privilege code, a priority level, or the privilege code and the priority level with a mobile application of a mobile device; and at least one instruction for granting or denying the mobile application access to a system resource, the granting or denying being based at least on the priority level, the privilege code, or the priority level and the privilege code.

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based processor usage tracking (计算机程序产品) .
CN101889264A
CLAIM 25
. A computer program product (processor usage tracking), comprising: a computer-readable medium containing instructions for managing the allocation of system resources for a mobile device application, the instructions including: at least one instruction for associating a privilege code, a priority level, or the privilege code and the priority level with a mobile application of a mobile device; and at least one instruction for granting or denying the mobile application access to a system resource, the granting or denying being based at least on the priority level, the privilege code, or the priority level and the privilege code.

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based memory usage tracking .
CN101889264A
CLAIM 25
. A computer (cloud computing, cloud computing resource manager) program product, comprising: a computer-readable medium containing instructions for managing the allocation of system resources for a mobile device application, the instructions including: at least one instruction for associating a privilege code, a priority level, or the privilege code and the priority level with a mobile application of a mobile device; and at least one instruction for granting or denying the mobile application access to a system resource, the granting or denying being based at least on the priority level, the privilege code, or the priority level and the privilege code.

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
CN101889264A
CLAIM 25
. A computer (cloud computing, cloud computing resource manager) program product, comprising: a computer-readable medium containing instructions for managing the allocation of system resources for a mobile device application, the instructions including: at least one instruction for associating a privilege code, a priority level, or the privilege code and the priority level with a mobile application of a mobile device; and at least one instruction for granting or denying the mobile application access to a system resource, the granting or denying being based at least on the priority level, the privilege code, or the priority level and the privilege code.




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN101126992A

Filed: 2007-07-13     Issued: 2008-02-20

Method and system for distributing a plurality of tasks among a plurality of nodes in a network

(Original Assignee) 国际商业机器公司     

苏工, 伯纳德·R.·皮尔斯, 唐纳德·W.·施密特, 斯蒂芬·J.·海斯格, 唐娜·N.·第兰博格, 格雷格·A.·狄克
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (的使用) , memory usage (的使用) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101126992A
CLAIM 6
. The method of claim 5, further comprising the steps of: extending the multi-dimensional balancing such that each dimension can represent nodes corresponding to a plurality of processor types; and rearranging tasks in a cell according to predetermined usage (processor usage, memory usage, processor usage tracking) rules such that the rearrangement of the tasks is asymmetric.

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (的使用) tracking .
CN101126992A
CLAIM 6
. The method of claim 5, further comprising the steps of: extending the multi-dimensional balancing such that each dimension can represent nodes corresponding to a plurality of processor types; and rearranging tasks in a cell according to predetermined usage (processor usage, memory usage, processor usage tracking) rules such that the rearrangement of the tasks is asymmetric.

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (的使用) tracking .
CN101126992A
CLAIM 6
. The method of claim 5, further comprising the steps of: extending the multi-dimensional balancing such that each dimension can represent nodes corresponding to a plurality of processor types; and rearranging tasks in a cell according to predetermined usage (processor usage, memory usage, processor usage tracking) rules such that the rearrangement of the tasks is asymmetric.

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (在网络) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN101126992A
CLAIM 12
. A system for distributing a plurality of tasks among a plurality of nodes in a network (community cloud), the system comprising: a plurality of processors for executing tasks; a plurality of nodes including the processors; a task scheduler for: receiving a plurality of tasks; computing task processor consumption values for the plurality of tasks; computing node processor consumption values for the plurality of nodes; computing target node processor consumption values for the plurality of nodes, the target node processor consumption values indicating an optimal node processor consumption; and computing a load index value according to the difference between the computed node processor consumption value of node i and the target node processor consumption value of node i; and a balancer for distributing tasks among the nodes according to the computed load index value of each node so as to balance processor workload across the nodes, such that the computed load index value of each node is substantially zero.
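
The quoted CN101126992A claim 12 computes, for each node i, a load index from the difference between its computed processor consumption and a target consumption, i.e. load_index_i = consumed_i - target_i, and balances until every index is roughly zero. The claim leaves the target unspecified beyond "optimal node processor consumption"; the sketch below apportions it by relative capacity, which is an assumption.

    def load_indexes(consumed, capacity):
        """consumed[i] is the computed processor consumption of node i;
        capacity[i] is its relative processor capacity (used only to derive the
        assumed target). Returns load_index_i = consumed_i - target_i."""
        total = sum(consumed)
        targets = [total * c / sum(capacity) for c in capacity]
        return [consumed[i] - targets[i] for i in range(len(consumed))]

    print(load_indexes([30, 10, 20], [1, 1, 1]))   # [10.0, -10.0, 0.0] -> shift work off node 0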

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (的使用) , memory usage (的使用) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101126992A
CLAIM 6
. The method of claim 5, further comprising the steps of: extending the multi-dimensional balancing such that each dimension can represent nodes corresponding to a plurality of processor types; and rearranging tasks in a cell according to predetermined usage (processor usage, memory usage, processor usage tracking) rules such that the rearrangement of the tasks is asymmetric.

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (的使用) tracking .
CN101126992A
CLAIM 6
. The method of claim 5, further comprising the steps of: extending the multi-dimensional balancing such that each dimension can represent nodes corresponding to a plurality of processor types; and rearranging tasks in a cell according to predetermined usage (processor usage, memory usage, processor usage tracking) rules such that the rearrangement of the tasks is asymmetric.

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (的使用) tracking .
CN101126992A
CLAIM 6
. The method of claim 5, further comprising the steps of: extending the multi-dimensional balancing such that each dimension can represent nodes corresponding to a plurality of processor types; and rearranging tasks in a cell according to predetermined usage (processor usage, memory usage, processor usage tracking) rules such that the rearrangement of the tasks is asymmetric.

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (在网络) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN101126992A
CLAIM 12
. A system for distributing a plurality of tasks among a plurality of nodes in a network (community cloud), the system comprising: a plurality of processors for executing tasks; a plurality of nodes including the processors; a task scheduler for: receiving a plurality of tasks; computing task processor consumption values for the plurality of tasks; computing node processor consumption values for the plurality of nodes; computing target node processor consumption values for the plurality of nodes, the target node processor consumption values indicating an optimal node processor consumption; and computing a load index value according to the difference between the computed node processor consumption value of node i and the target node processor consumption value of node i; and a balancer for distributing tasks among the nodes according to the computed load index value of each node so as to balance processor workload across the nodes, such that the computed load index value of each node is substantially zero.

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (的使用) , memory usage (的使用) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101126992A
CLAIM 6
. The method of claim 5, further comprising the steps of: extending the multi-dimensional balancing such that each dimension can represent nodes corresponding to a plurality of processor types; and rearranging tasks in a cell according to predetermined usage (processor usage, memory usage, processor usage tracking) rules such that the rearrangement of the tasks is asymmetric.

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (的使用) tracking .
CN101126992A
CLAIM 6
. The method of claim 5, further comprising the steps of: extending the multi-dimensional balancing such that each dimension can represent nodes corresponding to a plurality of processor types; and rearranging tasks in a cell according to predetermined usage (processor usage, memory usage, processor usage tracking) rules such that the rearrangement of the tasks is asymmetric.

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (的使用) tracking .
CN101126992A
CLAIM 6
. The method of claim 5, further comprising the steps of: extending the multi-dimensional balancing such that each dimension can represent nodes corresponding to a plurality of processor types; and rearranging tasks in a cell according to predetermined usage (processor usage, memory usage, processor usage tracking) rules such that the rearrangement of the tasks is asymmetric.

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (在网络) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN101126992A
CLAIM 12
. A system for distributing a plurality of tasks among a plurality of nodes in a network (community cloud), the system comprising: a plurality of processors for executing tasks; a plurality of nodes including the processors; a task scheduler for: receiving a plurality of tasks; computing task processor consumption values for the plurality of tasks; computing node processor consumption values for the plurality of nodes; computing target node processor consumption values for the plurality of nodes, the target node processor consumption values indicating an optimal node processor consumption; and computing a load index value according to the difference between the computed node processor consumption value of node i and the target node processor consumption value of node i; and a balancer for distributing tasks among the nodes according to the computed load index value of each node so as to balance processor workload across the nodes, such that the computed load index value of each node is substantially zero.




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN101098306A

Filed: 2007-04-26     Issued: 2008-01-02

Method and system for controlling when to send messages in a stream processing system

(Original Assignee) 国际商业机器公司     

罗伯特·E.·斯托姆
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing (一种计算) environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates (可用的) for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101098306A
CLAIM 1
. A computer (cloud computing, cloud computing resource manager) -implemented method for controlling when to send messages in a stream processing system, the computer-implemented method comprising: prior to stream processing, determining a policy using probabilistic statistics and a cost function, wherein the policy specifies under which conditions messages are sent eagerly and under which other conditions the messages are delayed; during stream processing, running a filter that selects, according to a threshold, which messages to send from a sending transform; and during stream processing, running a controller that observes a current state of a receiving transform and applies the policy according to the current state to determine whether to change the threshold.

CN101098306A
CLAIM 8
. The computer-implemented method of claim 1, wherein the current state is one of a plurality of states, and the plurality of states include the threshold, a number of tuples available (access rates) to the first k transforms, a weighted total of a number of messages associated with the cost at an instant, and a penalty for there being fewer than K available tuples at the instant.
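
The quoted CN101098306A claims describe a sending-side filter that releases a message only when it clears a threshold, and a receiving-side controller that watches the current state (claim 8 lists the threshold and the number of available tuples among the state variables) and adjusts that threshold. A compressed sketch under assumptions: the message scores, the step size and the k target are invented here, and the probabilistic cost model of claim 1 is omitted.

    def run_filter(scored_messages, threshold):
        # Sending transform side: send eagerly only if the score clears the threshold.
        sent, delayed = [], []
        for score, msg in scored_messages:
            (sent if score >= threshold else delayed).append(msg)
        return sent, delayed

    def adjust_threshold(threshold, available_tuples, k=3, step=0.1):
        # Receiving transform side: with fewer than k tuples available, lower the
        # threshold so more messages are sent eagerly; otherwise raise it.
        return max(0.0, threshold - step) if available_tuples < k else threshold + step

    sent, delayed = run_filter([(0.9, "m0"), (0.2, "m1"), (0.7, "m2")], threshold=0.5)
    new_threshold = adjust_threshold(0.5, available_tuples=len(sent))   # 2 < 3, so 0.4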

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage tracking (包括控制) .
CN101098306A
CLAIM 11
. A system comprising: a data processing system for receiving stream data from an input stream, wherein the data processing system determines a policy using probabilistic statistics and a cost function, the policy specifying under which conditions messages are sent eagerly and under which other conditions the messages are delayed, so as to minimize an expected cost of sending the messages per time period, and assigns segments to hosts for related processing; and a plurality of hosts operatively connected to the data processing system, wherein the messages are sent between each of the plurality of hosts; wherein the data processing system sends the policy to the plurality of hosts to control when the messages are sent among the plurality of hosts, wherein a sending host of the plurality of hosts operates a filter that selects, according to a threshold, which of the messages to send from a sending transform, and a receiving host of the plurality of hosts includes a controller (processor usage tracking) that detects the threshold and a current state of a receiving transform to enforce the policy and determines whether to change the threshold.

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (在网络) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN101098306A
CLAIM 14
. The system of claim 11, wherein the cost function calibrates how undesirable it is to send traffic over a network (community cloud) link and how undesirable an unnecessary delay in displaying results is.

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing (一种计算) resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates (可用的) for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101098306A
CLAIM 1
. A computer (cloud computing, cloud computing resource manager) -implemented method for controlling when to send messages in a stream processing system, the computer-implemented method comprising: prior to stream processing, determining a policy using probabilistic statistics and a cost function, wherein the policy specifies under which conditions messages are sent eagerly and under which other conditions the messages are delayed; during stream processing, running a filter that selects, according to a threshold, which messages to send from a sending transform; and during stream processing, running a controller that observes a current state of a receiving transform and applies the policy according to the current state to determine whether to change the threshold.

CN101098306A
CLAIM 8
. The computer-implemented method of claim 1, wherein the current state is one of a plurality of states, and the plurality of states include the threshold, a number of tuples available (access rates) to the first k transforms, a weighted total of a number of messages associated with the cost at an instant, and a penalty for there being fewer than K available tuples at the instant.

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
CN101098306A
CLAIM 1
. A computer (cloud computing, cloud computing resource manager) -implemented method for controlling when to send messages in a stream processing system, the computer-implemented method comprising: prior to stream processing, determining a policy using probabilistic statistics and a cost function, wherein the policy specifies under which conditions messages are sent eagerly and under which other conditions the messages are delayed; during stream processing, running a filter that selects, according to a threshold, which messages to send from a sending transform; and during stream processing, running a controller that observes a current state of a receiving transform and applies the policy according to the current state to determine whether to change the threshold.

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based processor usage tracking (包括控制) .
CN101098306A
CLAIM 1
. A computer (cloud computing, cloud computing resource manager) -implemented method for controlling when to send messages in a stream processing system, the computer-implemented method comprising: prior to stream processing, determining a policy using probabilistic statistics and a cost function, wherein the policy specifies under which conditions messages are sent eagerly and under which other conditions the messages are delayed; during stream processing, running a filter that selects, according to a threshold, which messages to send from a sending transform; and during stream processing, running a controller that observes a current state of a receiving transform and applies the policy according to the current state to determine whether to change the threshold.

CN101098306A
CLAIM 11
. A system comprising: a data processing system for receiving stream data from an input stream, wherein the data processing system determines a policy using probabilistic statistics and a cost function, the policy specifying under which conditions messages are sent eagerly and under which other conditions the messages are delayed, so as to minimize an expected cost of sending the messages per time period, and assigns segments to hosts for related processing; and a plurality of hosts operatively connected to the data processing system, wherein the messages are sent between each of the plurality of hosts; wherein the data processing system sends the policy to the plurality of hosts to control when the messages are sent among the plurality of hosts, wherein a sending host of the plurality of hosts operates a filter that selects, according to a threshold, which of the messages to send from a sending transform, and a receiving host of the plurality of hosts includes a controller (processor usage tracking) that detects the threshold and a current state of a receiving transform to enforce the policy and determines whether to change the threshold.

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based memory usage tracking .
CN101098306A
CLAIM 1
. A computer (cloud computing, cloud computing resource manager) -implemented method for controlling when to send messages in a stream processing system, the computer-implemented method comprising: prior to stream processing, determining a policy using probabilistic statistics and a cost function, wherein the policy specifies under which conditions messages are sent eagerly and under which other conditions the messages are delayed; during stream processing, running a filter that selects, according to a threshold, which messages to send from a sending transform; and during stream processing, running a controller that observes a current state of a receiving transform and applies the policy according to the current state to determine whether to change the threshold.

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (在网络) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN101098306A
CLAIM 14
. The system of claim 11, wherein the cost function calibrates how undesirable it is to send traffic over a network (community cloud) link and how undesirable an unnecessary delay in displaying results is.

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
CN101098306A
CLAIM 1
. A computer (cloud computing, cloud computing resource manager) -implemented method for controlling when to send messages in a stream processing system, the computer-implemented method comprising: prior to stream processing, determining a policy using probabilistic statistics and a cost function, wherein the policy specifies under which conditions messages are sent eagerly and under which other conditions the messages are delayed; during stream processing, running a filter that selects, according to a threshold, which messages to send from a sending transform; and during stream processing, running a controller that observes a current state of a receiving transform and applies the policy according to the current state to determine whether to change the threshold.

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing (一种计算) environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates (可用的) , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101098306A
CLAIM 1
. A computer (cloud computing, cloud computing resource manager) -implemented method for controlling when to send messages in a stream processing system, the computer-implemented method comprising: prior to stream processing, determining a policy using probabilistic statistics and a cost function, wherein the policy specifies under which conditions messages are sent eagerly and under which other conditions the messages are delayed; during stream processing, running a filter that selects, according to a threshold, which messages to send from a sending transform; and during stream processing, running a controller that observes a current state of a receiving transform and applies the policy according to the current state to determine whether to change the threshold.

CN101098306A
CLAIM 8
. The computer-implemented method of claim 1, wherein the current state is one of a plurality of states, and the plurality of states include the threshold, a number of tuples available (access rates) to the first k transforms, a weighted total of a number of messages associated with the cost at an instant, and a penalty for there being fewer than K available tuples at the instant.

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
CN101098306A
CLAIM 1
. A computer (cloud computing, cloud computing resource manager) -implemented method for controlling when to send messages in a stream processing system, the computer-implemented method comprising: prior to stream processing, determining a policy using probabilistic statistics and a cost function, wherein the policy specifies under which conditions messages are sent eagerly and under which other conditions the messages are delayed; during stream processing, running a filter that selects, according to a threshold, which messages to send from a sending transform; and during stream processing, running a controller that observes a current state of a receiving transform and applies the policy according to the current state to determine whether to change the threshold.

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based processor usage tracking (包括控制) .
CN101098306A
CLAIM 1
. A computer (cloud computing, cloud computing resource manager) -implemented method for controlling when to send messages in a stream processing system, the computer-implemented method comprising: prior to stream processing, determining a policy using probabilistic statistics and a cost function, wherein the policy specifies under which conditions messages are sent eagerly and under which other conditions the messages are delayed; during stream processing, running a filter that selects, according to a threshold, which messages to send from a sending transform; and during stream processing, running a controller that observes a current state of a receiving transform and applies the policy according to the current state to determine whether to change the threshold.

CN101098306A
CLAIM 11
. A system comprising: a data processing system for receiving stream data from an input stream, wherein the data processing system determines a policy using probabilistic statistics and a cost function, the policy specifying under which conditions messages are sent eagerly and under which other conditions the messages are delayed, so as to minimize an expected cost of sending the messages per time period, and assigns segments to hosts for related processing; and a plurality of hosts operatively connected to the data processing system, wherein the messages are sent between each of the plurality of hosts; wherein the data processing system sends the policy to the plurality of hosts to control when the messages are sent among the plurality of hosts, wherein a sending host of the plurality of hosts operates a filter that selects, according to a threshold, which of the messages to send from a sending transform, and a receiving host of the plurality of hosts includes a controller (processor usage tracking) that detects the threshold and a current state of a receiving transform to enforce the policy and determines whether to change the threshold.

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to use LIRS based memory usage tracking .
CN101098306A
CLAIM 1
. A computer (cloud computing, cloud computing resource manager) -implemented method for controlling when to send messages in a stream processing system, the computer-implemented method comprising: prior to stream processing, determining a policy using probabilistic statistics and a cost function, wherein the policy specifies under which conditions messages are sent eagerly and under which other conditions the messages are delayed; during stream processing, running a filter that selects, according to a threshold, which messages to send from a sending transform; and during stream processing, running a controller that observes a current state of a receiving transform and applies the policy according to the current state to determine whether to change the threshold.

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (在网络) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN101098306A
CLAIM 14
. The system of claim 11, wherein the cost function calibrates how undesirable it is to send traffic over a network (community cloud) link and how undesirable an unnecessary delay in displaying results is.

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (一种计算) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
CN101098306A
CLAIM 1
. A computer (cloud computing, cloud computing resource manager) -implemented method for controlling when to send messages in a stream processing system, the computer-implemented method comprising: prior to stream processing, determining a policy using probabilistic statistics and a cost function, wherein the policy specifies under which conditions messages are sent eagerly and under which other conditions the messages are delayed; during stream processing, running a filter that selects, according to a threshold, which messages to send from a sending transform; and during stream processing, running a controller that observes a current state of a receiving transform and applies the policy according to the current state to determine whether to change the threshold.




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
CN101410803A

Filed: 2007-01-24     Issued: 2009-04-15

Method and system for providing access to a computing environment

(Original Assignee) 思杰系统有限公司     

D·N·罗宾森, B·J·佩德森, R·J·克罗夫特, A·E·洛, R·J·玛扎菲利
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (少一个数据) , or input/output (I/O) access rates (用于访问) for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101410803A
CLAIM 1
. A method of providing access to a computing environment, the method comprising the steps of: (a) receiving, by a broker machine, a request from a client for access (access rates, I/O access rates) to a computing environment, the request including a user identifier of the client; (b) identifying one of a plurality of virtual machines, the identified virtual machine providing the requested computing environment; (c) identifying one of a plurality of execution machines, the identified execution machine executing a hypervisor that provides access to the hardware resources required by the identified virtual machine; and (d) establishing a connection between the client and the identified virtual machine.

CN101410803A
CLAIM 279
. The method of claim 278, further comprising: updating, by the broker server, at least one data (memory usage) record associated with the identified computing environment, the record indicating that the client and the identified computing environment have been disconnected.
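
The quoted CN101410803A claim 1 is a brokered-access flow: receive a client request carrying a user identifier, pick a virtual machine that provides the requested environment, pick an execution machine whose hypervisor exposes the hardware that VM needs, and connect the two. The sketch below mirrors those four steps; all dictionary shapes and field names are assumptions.

    def broker_connect(request, virtual_machines, execution_machines):
        """(a) request carries the client's user id and desired environment,
        (b) pick a VM providing that environment, (c) pick an execution machine whose
        hypervisor provides the hardware the VM needs, (d) return the connection."""
        vm = next(v for v in virtual_machines if v["environment"] == request["environment"])
        host = next(h for h in execution_machines
                    if vm["hardware"] in h["hypervisor_resources"])
        return {"client": request["user_id"], "vm": vm["id"], "host": host["id"]}

    conn = broker_connect(
        {"user_id": "alice", "environment": "win-dev"},
        [{"id": "vm-7", "environment": "win-dev", "hardware": "gpu"}],
        [{"id": "exec-2", "hypervisor_resources": {"gpu", "ssd"}}],
    )   # {'client': 'alice', 'vm': 'vm-7', 'host': 'exec-2'}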

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage tracking (代理相关) .
CN101410803A
CLAIM 42
. The method of claim 38, wherein step (c) further comprises: launching the client agent, by the web browser, after successfully matching an entry in the hyperlink configuration file with an identifier associated (processor usage tracking) with the client agent in a registry file accessible to the web browser.

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (少一个数据) tracking .
CN101410803A
CLAIM 279
. The method of claim 278, further comprising: updating, by the broker server, at least one data (memory usage) record associated with the identified computing environment, the record indicating that the client and the identified computing environment have been disconnected.

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (在网络) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN101410803A
CLAIM 259
. The method of claim 258, wherein step (a) further comprises: requesting the resource over a network (community cloud) connection.

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager (一个表) to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (少一个数据) , or I/O access rates (用于访问) for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101410803A
CLAIM 1
. A method of providing access to a computing environment , the method comprising the steps of : (a) receiving , by a broker machine , a request from a client for access to (access rates, I/O access rates) a computing environment , the request including a user identifier of the client ; (b) identifying one of a plurality of virtual machines , the identified virtual machine providing the requested computing environment ; (c) identifying one of a plurality of execution machines , the identified execution machine executing a hypervisor , the hypervisor providing access to the hardware resources required by the identified virtual machine ; and (d) establishing a connection between the client and the identified virtual machine .

CN101410803A
CLAIM 42
. The method of claim 38 , wherein step (c) further comprises : launching the client agent , by the web browser , after successfully matching a table entry (cloud computing resource manager) in the hyperlink configuration file with an identifier associated with the client agent in a registration file accessible to the web browser .

CN101410803A
CLAIM 279
. The method of claim 278 , further comprising : updating , by the broker server , at least one data (memory usage) record associated with the identified computing environment , the identified computing environment indicating that the client and the identified computing environment have been disconnected .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager (a table entry) to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
CN101410803A
CLAIM 42
. The method of claim 38 , wherein step (c) further comprises : launching the client agent , by the web browser , after successfully matching a table entry (cloud computing resource manager) in the hyperlink configuration file with an identifier associated with the client agent in a registration file accessible to the web browser .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager (a table entry) to use LIRS based processor usage tracking (associated with the client agent) .
CN101410803A
CLAIM 42
. The method of claim 38 , wherein step (c) further comprises : launching the client agent , by the web browser , after successfully matching a table entry (cloud computing resource manager) in the hyperlink configuration file with an identifier associated with the client agent (processor usage tracking) in a registration file accessible to the web browser .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager (a table entry) to use LIRS based memory usage (at least one data) tracking .
CN101410803A
CLAIM 42
. The method of claim 38 , wherein step (c) further comprises : launching the client agent , by the web browser , after successfully matching a table entry (cloud computing resource manager) in the hyperlink configuration file with an identifier associated with the client agent in a registration file accessible to the web browser .

CN101410803A
CLAIM 279
. The method of claim 278 , further comprising : updating , by the broker server , at least one data (memory usage) record associated with the identified computing environment , the identified computing environment indicating that the client and the identified computing environment have been disconnected .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (over a network) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN101410803A
CLAIM 259
. The method of claim 258 , wherein step (a) further comprises : requesting the resource over a network (community cloud) connection .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager (a table entry) to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
CN101410803A
CLAIM 42
. The method of claim 38 , wherein step (c) further comprises : launching the client agent , by the web browser , after successfully matching a table entry (cloud computing resource manager) in the hyperlink configuration file with an identifier associated with the client agent in a registration file accessible to the web browser .
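
Note (illustrative only): the least recently used (LRU) prioritization recited in claim 12 can be pictured as keeping VMs in access order and nominating the stalest one first. The sketch below is a hypothetical illustration, not code from either document.

    # Illustrative LRU-style prioritization of VMs (hypothetical names): the VM
    # whose resources were least recently used is the first candidate to have
    # its consumption migrated.
    from collections import OrderedDict

    class LruPrioritizer:
        def __init__(self):
            self._order = OrderedDict()          # oldest entry = least recently used

        def touch(self, vm_id):
            # Call whenever a VM consumes cloud resources.
            self._order.pop(vm_id, None)
            self._order[vm_id] = True

        def least_recently_used(self, count=1):
            # Oldest entries come first in insertion order.
            return list(self._order)[:count]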

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager (a table entry) communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (at least one data) , I/O access rates (for access to) , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
CN101410803A
CLAIM 1
. A method of providing access to a computing environment , the method comprising the steps of : (a) receiving , by a broker machine , a request from a client for access to (access rates, I/O access rates) a computing environment , the request including a user identifier of the client ; (b) identifying one of a plurality of virtual machines , the identified virtual machine providing the requested computing environment ; (c) identifying one of a plurality of execution machines , the identified execution machine executing a hypervisor , the hypervisor providing access to the hardware resources required by the identified virtual machine ; and (d) establishing a connection between the client and the identified virtual machine .

CN101410803A
CLAIM 42
. The method of claim 38 , wherein step (c) further comprises : launching the client agent , by the web browser , after successfully matching a table entry (cloud computing resource manager) in the hyperlink configuration file with an identifier associated with the client agent in a registration file accessible to the web browser .

CN101410803A
CLAIM 279
. The method of claim 278 , further comprising : updating , by the broker server , at least one data (memory usage) record associated with the identified computing environment , the identified computing environment indicating that the client and the identified computing environment have been disconnected .
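
Note (illustrative only): among the quantities monitored in claim 13, the "change region size" of a VM's graphical display is the least conventional. One plausible way to compute such a signal is to tile two consecutive frames and count the tiles that differ, as in the hypothetical sketch below (frame capture itself is assumed and not shown).

    # Sketch of a "change region size" signal: compare two frames of a VM's
    # graphical display and measure how much of it changed. Frames are 2-D
    # lists of pixel values; the 16-pixel tile size is an arbitrary assumption.
    def changed_region_fraction(prev_frame, cur_frame, tile=16):
        height, width = len(cur_frame), len(cur_frame[0])
        changed = total = 0
        for y in range(0, height, tile):
            for x in range(0, width, tile):
                total += 1
                block_differs = any(
                    prev_frame[yy][xx] != cur_frame[yy][xx]
                    for yy in range(y, min(y + tile, height))
                    for xx in range(x, min(x + tile, width)))
                if block_differs:
                    changed += 1
        return changed / total if total else 0.0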

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager (a table entry) to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
CN101410803A
CLAIM 42
. The method of claim 38 , wherein step (c) further comprises : launching the client agent , by the web browser , after successfully matching a table entry (cloud computing resource manager) in the hyperlink configuration file with an identifier associated with the client agent in a registration file accessible to the web browser .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager (a table entry) to use LIRS based processor usage tracking (associated with the client agent) .
CN101410803A
CLAIM 42
. The method of claim 38 , wherein step (c) further comprises : launching the client agent , by the web browser , after successfully matching a table entry (cloud computing resource manager) in the hyperlink configuration file with an identifier associated with the client agent (processor usage tracking) in a registration file accessible to the web browser .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager (a table entry) to use LIRS based memory usage (at least one data) tracking .
CN101410803A
CLAIM 42
. The method of claim 38 , wherein step (c) further comprises : launching the client agent , by the web browser , after successfully matching a table entry (cloud computing resource manager) in the hyperlink configuration file with an identifier associated with the client agent in a registration file accessible to the web browser .

CN101410803A
CLAIM 279
. The method of claim 278 , further comprising : updating , by the broker server , at least one data (memory usage) record associated with the identified computing environment , the identified computing environment indicating that the client and the identified computing environment have been disconnected .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud (over a network) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
CN101410803A
CLAIM 259
. The method of claim 258 , wherein step (a) further comprises : requesting the resource over a network (community cloud) connection .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager (a table entry) to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
CN101410803A
CLAIM 42
. The method of claim 38 , wherein step (c) further comprises : launching the client agent , by the web browser , after successfully matching a table entry (cloud computing resource manager) in the hyperlink configuration file with an identifier associated with the client agent in a registration file accessible to the web browser .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20080163239A1

Filed: 2006-12-29     Issued: 2008-07-03

Method for dynamic load balancing on partitioned systems

(Original Assignee) Intel Corp     (Current Assignee) Intel Corp

Suresh Sugumar, Kiran Panesar
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (more processor cores) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20080163239A1
CLAIM 2
. The computing apparatus of claim 1 , wherein the first and second partitions each comprise one or more processor cores (memory usage, memory usage tracking) .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (more processor cores) tracking .
US20080163239A1
CLAIM 2
. The computing apparatus of claim 1 , wherein the first and second partitions each comprise one or more processor cores (memory usage, memory usage tracking) .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions (second instruction, first instruction) that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (more processor cores) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20080163239A1
CLAIM 2
. The computing apparatus of claim 1 , wherein the first and second partitions each comprise one or more processor cores (memory usage, memory usage tracking) .

US20080163239A1
CLAIM 28
. An article of manufacture , including a computer-readable medium having instructions to be executed by a machine , the machine having a plurality of partitions , wherein each partition includes one or more virtual machines to execute thereon , the instructions comprising : a first instruction (therein instructions) to determine a first load status of a first partition ;
a second instruction (therein instructions) to determine a second load status of a second partition ;
and a third instruction to migrate one of the virtual machines between the first partition and the second partition when the first load status and the second load status match a pre-determined criteria .
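
Note (illustrative only): claim 28 of US20080163239A1 recites three instructions — determine a first load status, determine a second load status, and migrate a virtual machine when the two statuses match a pre-determined criterion. A hypothetical rendering of that structure, not the reference's actual implementation:

    # Hypothetical partitions: each is a dict {"vms": [{"id": ..., "cpu": ...}, ...]}.
    def load_status(partition):
        # Hypothetical metric: mean CPU load of the VMs on the partition.
        vms = partition["vms"]
        return sum(vm["cpu"] for vm in vms) / len(vms) if vms else 0.0

    def maybe_migrate(part_a, part_b, imbalance=0.30):
        a, b = load_status(part_a), load_status(part_b)   # first and second load status
        if a - b > imbalance and part_a["vms"]:            # pre-determined criterion
            vm = max(part_a["vms"], key=lambda v: v["cpu"])
            part_a["vms"].remove(vm)                       # migrate one VM between partitions
            part_b["vms"].append(vm)
            return vm["id"]
        return None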

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (more processor cores) tracking .
US20080163239A1
CLAIM 2
. The computing apparatus of claim 1 , wherein the first and second partitions each comprise one or more processor cores (memory usage, memory usage tracking) .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions (second instruction, first instruction) that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (more processor cores) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20080163239A1
CLAIM 2
. The computing apparatus of claim 1 , wherein the first and second partitions each comprise one or more processor cores (memory usage, memory usage tracking) .

US20080163239A1
CLAIM 28
. An article of manufacture , including a computer-readable medium having instructions to be executed by a machine , the machine having a plurality of partitions , wherein each partition includes one or more virtual machines to execute thereon , the instructions comprising : a first instruction (therein instructions) to determine a first load status of a first partition ;
a second instruction (therein instructions) to determine a second load status of a second partition ;
and a third instruction to migrate one of the virtual machines between the first partition and the second partition when the first load status and the second load status match a pre-determined criteria .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (more processor cores) tracking .
US20080163239A1
CLAIM 2
. The computing apparatus of claim 1 , wherein the first and second partitions each comprise one or more processor cores (memory usage, memory usage tracking) .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20070079308A1

Filed: 2005-09-30     Issued: 2007-04-05

Managing virtual machines

(Original Assignee) Computer Associates Think Inc     (Current Assignee) Computer Associates Think Inc

Michael Chiaramonte, Kouros Esfahany, Karthik Narayanan
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (more processors) , memory usage , or input/output (I/O) access rates (lower limit) for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates (more hardware) , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20070079308A1
CLAIM 1
. An apparatus for use in managing virtual machines , the apparatus comprising : a processor ;
and software encoded in media operable when executed on the processor to : access a poll interval and an analysis interval ;
access virtual machine criteria for at least one virtual machine having affinity for one or more hardware (I/O access rates) components , the virtual machine criteria comprising an upper limit and a lower limit (access rates) for at least one performance characteristic ;
receive periodic information comprising the at least one performance characteristic , with the periodic information being received at a rate substantially equal to the poll interval ;
determine if the at least one performance characteristic is between the upper limit and the lower limit by periodically analyzing the periodic information at a periodic rate substantially equal to the analysis interval ;
in response to a determination that the at least one performance characteristic is not between the upper limit and the lower limit , modifying the virtual machine affinity with respect to the one or more hardware components .
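
Note (illustrative only): claim 1 of US20070079308A1 describes a poll-interval/analysis-interval loop that modifies a virtual machine's hardware affinity when a performance characteristic leaves the band between the lower and upper limits. A hedged sketch of that loop, with hypothetical helper names (read_metric, adjust_affinity):

    import time

    def manage_affinity(vm, read_metric, adjust_affinity,
                        poll_s=5, analysis_s=60, lower=0.2, upper=0.8):
        samples = []
        next_analysis = time.monotonic() + analysis_s
        while True:
            samples.append(read_metric(vm))        # periodic information at the poll interval
            if time.monotonic() >= next_analysis:  # analysed at the analysis interval
                avg = sum(samples) / len(samples)
                if not (lower <= avg <= upper):
                    adjust_affinity(vm, avg)       # modify affinity to hardware components
                samples.clear()
                next_analysis += analysis_s
            time.sleep(poll_s)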

US20070079308A1
CLAIM 20
. Software for use in managing a plurality of virtual machines each having processing capability through association with one or more of a plurality of central processing units , the software embodied in a computer-readable medium and when executed using one or more processors (processor usage, processor usage tracking) , operable to : access performance criteria comprising : a first range , a second range , and a third range for at least one performance characteristic associated with the plurality of central processing units ;
and a fourth range and a fifth range for at least one performance characteristic associated with each of the plurality of virtual machines ;
determine a value of the at least one performance characteristic associated with the plurality of central processing units and the value of the at least one performance characteristic associated with each of the plurality of virtual machines ;
in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the first range and that the at least one performance characteristic associated with each of the plurality of virtual machines is in the fourth range for each of the plurality of virtual machines , automatically reduce the number of central processing units included in the plurality of central processing units ;
in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the second range and that the at least one performance characteristic associated with each of the plurality of virtual machines is in the fifth range for one or more of the plurality of virtual machines , automatically increase the number of central processing units included in the plurality of central processing units ;
and in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the third range , automatically increase the number of central processing units included in the plurality of central processing units .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (more processors) tracking .
US20070079308A1
CLAIM 20
. Software for use in managing a plurality of virtual machines each having processing capability through association with one or more of a plurality of central processing units , the software embodied in a computer-readable medium and when executed using one or more processors (processor usage, processor usage tracking) , operable to : access performance criteria comprising : a first range , a second range , and a third range for at least one performance characteristic associated with the plurality of central processing units ;
and a fourth range and a fifth range for at least one performance characteristic associated with each of the plurality of virtual machines ;
determine a value of the at least one performance characteristic associated with the plurality of central processing units and the value of the at least one performance characteristic associated with each of the plurality of virtual machines ;
in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the first range and that the at least one performance characteristic associated with each of the plurality of virtual machines is in the fourth range for each of the plurality of virtual machines , automatically reduce the number of central processing units included in the plurality of central processing units ;
in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the second range and that the at least one performance characteristic associated with each of the plurality of virtual machines is in the fifth range for one or more of the plurality of virtual machines , automatically increase the number of central processing units included in the plurality of central processing units ;
and in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the third range , automatically increase the number of central processing units included in the plurality of central processing units .
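
Note (illustrative only): claim 20 of US20070079308A1 scales a shared CPU pool according to which configured range the pool-level and per-VM characteristics fall into (reduce in the first/fourth case, increase in the second/fifth case, increase again in the third). A hypothetical rendering with invented range labels:

    def in_range(value, rng):
        low, high = rng
        return low <= value < high

    def scale_cpu_pool(pool_util, vm_utils, ranges, cpu_count):
        # ranges = {"pool_low": (..), "pool_mid": (..), "pool_high": (..),
        #           "vm_idle": (..), "vm_busy": (..)}  -- hypothetical labels
        if in_range(pool_util, ranges["pool_low"]) and \
           all(in_range(u, ranges["vm_idle"]) for u in vm_utils):
            return max(1, cpu_count - 1)       # automatically reduce the pool
        if in_range(pool_util, ranges["pool_mid"]) and \
           any(in_range(u, ranges["vm_busy"]) for u in vm_utils):
            return cpu_count + 1               # automatically increase the pool
        if in_range(pool_util, ranges["pool_high"]):
            return cpu_count + 1               # increase regardless of per-VM state
        return cpu_count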

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (more processors) , memory usage , or I/O access rates (lower limit) (more hardware) for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20070079308A1
CLAIM 1
. An apparatus for use in managing virtual machines , the apparatus comprising : a processor ;
and software encoded in media operable when executed on the processor to : access a poll interval and an analysis interval ;
access virtual machine criteria for at least one virtual machine having affinity for one or more hardware (I/O access rates) components , the virtual machine criteria comprising an upper limit and a lower limit (access rates) for at least one performance characteristic ;
receive periodic information comprising the at least one performance characteristic , with the periodic information being received at a rate substantially equal to the poll interval ;
determine if the at least one performance characteristic is between the upper limit and the lower limit by periodically analyzing the periodic information at a periodic rate substantially equal to the analysis interval ;
in response to a determination that the at least one performance characteristic is not between the upper limit and the lower limit , modifying the virtual machine affinity with respect to the one or more hardware components .

US20070079308A1
CLAIM 20
. Software for use in managing a plurality of virtual machines each having processing capability through association with one or more of a plurality of central processing units , the software embodied in a computer-readable medium and when executed using one or more processors (processor usage, processor usage tracking) , operable to : access performance criteria comprising : a first range , a second range , and a third range for at least one performance characteristic associated with the plurality of central processing units ;
and a fourth range and a fifth range for at least one performance characteristic associated with each of the plurality of virtual machines ;
determine a value of the at least one performance characteristic associated with the plurality of central processing units and the value of the at least one performance characteristic associated with each of the plurality of virtual machines ;
in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the first range and that the at least one performance characteristic associated with each of the plurality of virtual machines is in the fourth range for each of the plurality of virtual machines , automatically reduce the number of central processing units included in the plurality of central processing units ;
in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the second range and that the at least one performance characteristic associated with each of the plurality of virtual machines is in the fifth range for one or more of the plurality of virtual machines , automatically increase the number of central processing units included in the plurality of central processing units ;
and in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the third range , automatically increase the number of central processing units included in the plurality of central processing units .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (more processors) tracking .
US20070079308A1
CLAIM 20
. Software for use in managing a plurality of virtual machines each having processing capability through association with one or more of a plurality of central processing units , the software embodied in a computer-readable medium and when executed using one or more processors (processor usage, processor usage tracking) , operable to : access performance criteria comprising : a first range , a second range , and a third range for at least one performance characteristic associated with the plurality of central processing units ;
and a fourth range and a fifth range for at least one performance characteristic associated with each of the plurality of virtual machines ;
determine a value of the at least one performance characteristic associated with the plurality of central processing units and the value of the at least one performance characteristic associated with each of the plurality of virtual machines ;
in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the first range and that the at least one performance characteristic associated with each of the plurality of virtual machines is in the fourth range for each of the plurality of virtual machines , automatically reduce the number of central processing units included in the plurality of central processing units ;
in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the second range and that the at least one performance characteristic associated with each of the plurality of virtual machines is in the fifth range for one or more of the plurality of virtual machines , automatically increase the number of central processing units included in the plurality of central processing units ;
and in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the third range , automatically increase the number of central processing units included in the plurality of central processing units .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (more processors) , memory usage , I/O access rates (lower limit) (more hardware) , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20070079308A1
CLAIM 1
. An apparatus for use in managing virtual machines , the apparatus comprising : a processor ;
and software encoded in media operable when executed on the processor to : access a poll interval and an analysis interval ;
access virtual machine criteria for at least one virtual machine having affinity for one or more hardware (I/O access rates) components , the virtual machine criteria comprising an upper limit and a lower limit (access rates) for at least one performance characteristic ;
receive periodic information comprising the at least one performance characteristic , with the periodic information being received at a rate substantially equal to the poll interval ;
determine if the at least one performance characteristic is between the upper limit and the lower limit by periodically analyzing the periodic information at a periodic rate substantially equal to the analysis interval ;
in response to a determination that the at least one performance characteristic is not between the upper limit and the lower limit , modifying the virtual machine affinity with respect to the one or more hardware components .

US20070079308A1
CLAIM 20
. Software for use in managing a plurality of virtual machines each having processing capability through association with one or more of a plurality of central processing units , the software embodied in a computer-readable medium and when executed using one or more processors (processor usage, processor usage tracking) , operable to : access performance criteria comprising : a first range , a second range , and a third range for at least one performance characteristic associated with the plurality of central processing units ;
and a fourth range and a fifth range for at least one performance characteristic associated with each of the plurality of virtual machines ;
determine a value of the at least one performance characteristic associated with the plurality of central processing units and the value of the at least one performance characteristic associated with each of the plurality of virtual machines ;
in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the first range and that the at least one performance characteristic associated with each of the plurality of virtual machines is in the fourth range for each of the plurality of virtual machines , automatically reduce the number of central processing units included in the plurality of central processing units ;
in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the second range and that the at least one performance characteristic associated with each of the plurality of virtual machines is in the fifth range for one or more of the plurality of virtual machines , automatically increase the number of central processing units included in the plurality of central processing units ;
and in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the third range , automatically increase the number of central processing units included in the plurality of central processing units .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (more processors) tracking .
US20070079308A1
CLAIM 20
. Software for use in managing a plurality of virtual machines each having processing capability through association with one or more of a plurality of central processing units , the software embodied in a computer-readable medium and when executed using one or more processors (processor usage, processor usage tracking) , operable to : access performance criteria comprising : a first range , a second range , and a third range for at least one performance characteristic associated with the plurality of central processing units ;
and a fourth range and a fifth range for at least one performance characteristic associated with each of the plurality of virtual machines ;
determine a value of the at least one performance characteristic associated with the plurality of central processing units and the value of the at least one performance characteristic associated with each of the plurality of virtual machines ;
in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the first range and that the at least one performance characteristic associated with each of the plurality of virtual machines is in the fourth range for each of the plurality of virtual machines , automatically reduce the number of central processing units included in the plurality of central processing units ;
in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the second range and that the at least one performance characteristic associated with each of the plurality of virtual machines is in the fifth range for one or more of the plurality of virtual machines , automatically increase the number of central processing units included in the plurality of central processing units ;
and in response to a determination that the at least one performance characteristic associated with the plurality of central processing units is in the third range , automatically increase the number of central processing units included in the plurality of central processing units .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20060155912A1

Filed: 2005-01-12     Issued: 2006-07-13

Server cluster having a virtual server

(Original Assignee) Dell Products LP     (Current Assignee) Dell Products LP

Sumankumar Singh, Timothy Abels, Peyman Najafirad
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment (virtual machines) , comprising : determining a consumption rate of cloud resources (virtual machines) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (first operating) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20060155912A1
CLAIM 6
. The server cluster of claim 1 , wherein each virtual node comprises : a first virtual machine that is licensed and is configured to emulated the hardware and software operating environment of the associated active node ;
and a second virtual machine that is unlicensed and is configured to emulated the hardware and software operating environment of the associated active node ;
wherein the first virtual machine is operable to host the applications of an associated primary node in the event of a failure in the associated primary node ;
wherein the second virtual machine is operable to become a licensed virtual machine in the event that the first virtual machine hosts the applications of an associated primary node ;
and wherein at least one of the plurality of active nodes runs a first operating (memory usage) system and wherein another of the plurality of active nodes

US20060155912A1
CLAIM 12
. A method for managing the operational status of the application of a server cluster having a plurality of active nodes and a physical standby node , comprising the steps of : establishing , within the standby node and for each active node , first and second standby virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) , wherein each virtual machine comprises an emulated representation of the operating environment of the corresponding active node ;
monitoring the utilization of each of the first standby virtual machines ;
migrating a first standby virtual machine to an active node if the operational status of the first standby virtual machine exceeds a threshold ;
configuring the second standby virtual machine corresponding to the migrated first standby virtual machine as a replacement for the first standby virtual machine ;
and creating a copy of the reconfigured second standby virtual machine as third standby virtual machine .
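
Note (illustrative only): claim 12 of US20060155912A1 monitors the first standby virtual machine, migrates it to an active node when a threshold is exceeded, promotes the second standby as its replacement, and clones a third. A hypothetical sketch of that rotation (object model and helpers are assumptions):

    def rotate_standby(node, utilization, threshold, migrate, clone):
        # node holds {"first": vm, "second": vm} standby images for an active node.
        if utilization(node["first"]) > threshold:
            migrate(node["first"])                  # first standby moves to an active node
            node["first"] = node["second"]          # second standby becomes the replacement
            node["second"] = clone(node["first"])   # a third standby is created as a copy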

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20060155912A1
CLAIM 12
. A method for managing the operational status of the application of a server cluster having a plurality of active nodes and a physical standby node , comprising the steps of : establishing , within the standby node and for each active node , first and second standby virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) , wherein each virtual machine comprises an emulated representation of the operating environment of the corresponding active node ;
monitoring the utilization of each of the first standby virtual machines ;
migrating a first standby virtual machine to an active node if the operational status of the first standby virtual machine exceeds a threshold ;
configuring the second standby virtual machine corresponding to the migrated first standby virtual machine as a replacement for the first standby virtual machine ;
and creating a copy of the reconfigured second standby virtual machine as third standby virtual machine .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the LIRS replacement scheme comprises using LIRS based processor usage tracking .
US20060155912A1
CLAIM 12
. A method for managing the operational status of the application of a server cluster having a plurality of active nodes and a physical standby node , comprising the steps of : establishing , within the standby node and for each active node , first and second standby virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) , wherein each virtual machine comprises an emulated representation of the operating environment of the corresponding active node ;
monitoring the utilization of each of the first standby virtual machines ;
migrating a first standby virtual machine to an active node if the operational status of the first standby virtual machine exceeds a threshold ;
configuring the second standby virtual machine corresponding to the migrated first standby virtual machine as a replacement for the first standby virtual machine ;
and creating a copy of the reconfigured second standby virtual machine as third standby virtual machine .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the LIRS replacement scheme comprises using LIRS based memory usage (first operating) tracking .
US20060155912A1
CLAIM 6
. The server cluster of claim 1 , wherein each virtual node comprises : a first virtual machine that is licensed and is configured to emulated the hardware and software operating environment of the associated active node ;
and a second virtual machine that is unlicensed and is configured to emulated the hardware and software operating environment of the associated active node ;
wherein the first virtual machine is operable to host the applications of an associated primary node in the event of a failure in the associated primary node ;
wherein the second virtual machine is operable to become a licensed virtual machine in the event that the first virtual machine hosts the applications of an associated primary node ;
and wherein at least one of the plurality of active nodes runs a first operating (memory usage) system and wherein another of the plurality of active nodes

US20060155912A1
CLAIM 12
. A method for managing the operational status of the application of a server cluster having a plurality of active nodes and a physical standby node , comprising the steps of : establishing , within the standby node and for each active node , first and second standby virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) , wherein each virtual machine comprises an emulated representation of the operating environment of the corresponding active node ;
monitoring the utilization of each of the first standby virtual machines ;
migrating a first standby virtual machine to an active node if the operational status of the first standby virtual machine exceeds a threshold ;
configuring the second standby virtual machine corresponding to the migrated first standby virtual machine as a replacement for the first standby virtual machine ;
and creating a copy of the reconfigured second standby virtual machine as third standby virtual machine .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
US20060155912A1
CLAIM 12
. A method for managing the operational status of the application of a server cluster having a plurality of active nodes and a physical standby node , comprising the steps of : establishing , within the standby node and for each active node , first and second standby virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) , wherein each virtual machine comprises an emulated representation of the operating environment of the corresponding active node ;
monitoring the utilization of each of the first standby virtual machines ;
migrating a first standby virtual machine to an active node if the operational status of the first standby virtual machine exceeds a threshold ;
configuring the second standby virtual machine corresponding to the migrated first standby virtual machine as a replacement for the first standby virtual machine ;
and creating a copy of the reconfigured second standby virtual machine as third standby virtual machine .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
US20060155912A1
CLAIM 12
. A method for managing the operational status of the application of a server cluster having a plurality of active nodes and a physical standby node , comprising the steps of : establishing , within the standby node and for each active node , first and second standby virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) , wherein each virtual machine comprises an emulated representation of the operating environment of the corresponding active node ;
monitoring the utilization of each of the first standby virtual machines ;
migrating a first standby virtual machine to an active node if the operational status of the first standby virtual machine exceeds a threshold ;
configuring the second standby virtual machine corresponding to the migrated first standby virtual machine as a replacement for the first standby virtual machine ;
and creating a copy of the reconfigured second standby virtual machine as third standby virtual machine .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources (virtual machines) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (first operating) , or I/O access rates for the one or more virtual machines in a cloud computing environment (virtual machines) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20060155912A1
CLAIM 6
. The server cluster of claim 1 , wherein each virtual node comprises : a first virtual machine that is licensed and is configured to emulated the hardware and software operating environment of the associated active node ;
and a second virtual machine that is unlicensed and is configured to emulated the hardware and software operating environment of the associated active node ;
wherein the first virtual machine is operable to host the applications of an associated primary node in the event of a failure in the associated primary node ;
wherein the second virtual machine is operable to become a licensed virtual machine in the event that the first virtual machine hosts the applications of an associated primary node ;
and wherein at least one of the plurality of active nodes runs a first operating (memory usage) system and wherein another of the plurality of active nodes

US20060155912A1
CLAIM 12
. A method for managing the operational status of the application of a server cluster having a plurality of active nodes and a physical standby node , comprising the steps of : establishing , within the standby node and for each active node , first and second standby virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) , wherein each virtual machine comprises an emulated representation of the operating environment of the corresponding active node ;
monitoring the utilization of each of the first standby virtual machines ;
migrating a first standby virtual machine to an active node if the operational status of the first standby virtual machine exceeds a threshold ;
configuring the second standby virtual machine corresponding to the migrated first standby virtual machine as a replacement for the first standby virtual machine ;
and creating a copy of the reconfigured second standby virtual machine as third standby virtual machine .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using a low inter-reference recency set (LIRS) replacement scheme .
US20060155912A1
CLAIM 12
. A method for managing the operational status of the application of a server cluster having a plurality of active nodes and a physical standby node , comprising the steps of : establishing , within the standby node and for each active node , first and second standby virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) , wherein each virtual machine comprises an emulated representation of the operating environment of the corresponding active node ;
monitoring the utilization of each of the first standby virtual machines ;
migrating a first standby virtual machine to an active node if the operational status of the first standby virtual machine exceeds a threshold ;
configuring the second standby virtual machine corresponding to the migrated first standby virtual machine as a replacement for the first standby virtual machine ;
and creating a copy of the reconfigured second standby virtual machine as third standby virtual machine .
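
For orientation on the "low inter-reference recency set (LIRS) replacement scheme" recited in claim 8, the sketch below ranks items by inter-reference recency (the number of distinct other items referenced between the last two references to an item). It is a simplification for illustration only, not a complete LIRS stack implementation, and none of the names come from the charted patents.

```python
# Simplified LIRS-style ordering: items with low inter-reference recency (IRR)
# are treated as "hot" and kept at high priority; items seen only once are cold.
from collections import defaultdict

def inter_reference_recency(trace):
    last_two = defaultdict(list)              # item -> last two reference positions
    for pos, item in enumerate(trace):
        last_two[item] = (last_two[item] + [pos])[-2:]
    irr = {}
    for item, positions in last_two.items():
        if len(positions) < 2:
            irr[item] = float("inf")          # referenced once: treated as cold
        else:
            between = set(trace[positions[0] + 1:positions[1]]) - {item}
            irr[item] = len(between)
    return irr

def prioritize_lirs(trace):
    irr = inter_reference_recency(trace)
    return sorted(irr, key=lambda item: irr[item])   # low IRR first

if __name__ == "__main__":
    access_trace = ["vm1", "vm2", "vm1", "vm3", "vm2", "vm1"]
    print(prioritize_lirs(access_trace))   # vm1 and vm2 rank ahead of vm3
```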

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (first operating) tracking .
US20060155912A1
CLAIM 6
. The server cluster of claim 1 , wherein each virtual node comprises : a first virtual machine that is licensed and is configured to emulated the hardware and software operating environment of the associated active node ;
and a second virtual machine that is unlicensed and is configured to emulated the hardware and software operating environment of the associated active node ;
wherein the first virtual machine is operable to host the applications of an associated primary node in the event of a failure in the associated primary node ;
wherein the second virtual machine is operable to become a licensed virtual machine in the event that the first virtual machine hosts the applications of an associated primary node ;
and wherein at least one of the plurality of active nodes runs a first operating (memory usage) system and wherein another of the plurality of active nodes
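
Claim 10 adds LIRS-based memory usage tracking. As a hypothetical continuation of the previous sketch, periodic memory samples can be reduced to an access trace that such an ordering would consume; prioritize_lirs is the assumed helper from the sketch above, and the sample values are invented for illustration.

```python
# Reduce per-interval memory samples to an access trace: each interval records
# the VM that touched the most memory, and the trace drives the IRR ordering.
memory_samples = [
    {"vm1": 512, "vm2": 128},
    {"vm1": 256, "vm2": 640},
    {"vm1": 700, "vm2": 300},
]
trace = [max(sample, key=sample.get) for sample in memory_samples]
# trace == ["vm1", "vm2", "vm1"]; prioritize_lirs(trace) would rank vm1 first.
```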

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
US20060155912A1
CLAIM 12
. A method for managing the operational status of the application of a server cluster having a plurality of active nodes and a physical standby node , comprising the steps of : establishing , within the standby node and for each active node , first and second standby virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) , wherein each virtual machine comprises an emulated representation of the operating environment of the corresponding active node ;
monitoring the utilization of each of the first standby virtual machines ;
migrating a first standby virtual machine to an active node if the operational status of the first standby virtual machine exceeds a threshold ;
configuring the second standby virtual machine corresponding to the migrated first standby virtual machine as a replacement for the first standby virtual machine ;
and creating a copy of the reconfigured second standby virtual machine as third standby virtual machine .
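
Claim 11 enumerates categories of alternate cloud resources (public, community, private, hybrid cloud, Internet resources, VPNs). The sketch below models that enumeration as a simple catalogue and picks an available pool; the catalogue entries, costs, and selection rule are assumptions for illustration only.

```python
# Hypothetical catalogue of alternate cloud resource pools of the kinds recited
# in claim 11, with a toy "cheapest available pool" selection rule.
ALTERNATE_CLOUD_RESOURCES = {
    "public_cloud":       {"available": True,  "cost": 3},
    "community_cloud":    {"available": False, "cost": 2},
    "private_cloud":      {"available": True,  "cost": 1},
    "hybrid_cloud":       {"available": True,  "cost": 2},
    "internet_resources": {"available": True,  "cost": 4},
    "vpn_resources":      {"available": False, "cost": 2},
}

def pick_alternate_resource(catalogue):
    # Choose the cheapest currently available alternate resource pool.
    candidates = [(info["cost"], name)
                  for name, info in catalogue.items() if info["available"]]
    return min(candidates)[1] if candidates else None

print(pick_alternate_resource(ALTERNATE_CLOUD_RESOURCES))  # -> "private_cloud"
```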

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using least recently used (LRU) replacement scheme .
US20060155912A1
CLAIM 12
. A method for managing the operational status of the application of a server cluster having a plurality of active nodes and a physical standby node , comprising the steps of : establishing , within the standby node and for each active node , first and second standby virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) , wherein each virtual machine comprises an emulated representation of the operating environment of the corresponding active node ;
monitoring the utilization of each of the first standby virtual machines ;
migrating a first standby virtual machine to an active node if the operational status of the first standby virtual machine exceeds a threshold ;
configuring the second standby virtual machine corresponding to the migrated first standby virtual machine as a replacement for the first standby virtual machine ;
and creating a copy of the reconfigured second standby virtual machine as third standby virtual machine .
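
For the "least recently used (LRU) replacement scheme" recited in claim 12, a minimal tracker can keep VMs in access order and surface the coldest VM as the first migration or reclamation candidate. The class and method names below are illustrative assumptions.

```python
# Minimal LRU ordering sketch: the VM touched least recently is the first
# candidate when resources must be reclaimed or migrated.
from collections import OrderedDict

class LRUTracker:
    def __init__(self):
        self._order = OrderedDict()

    def touch(self, vm_name):
        # Move the VM to the most-recently-used end on every access.
        self._order.pop(vm_name, None)
        self._order[vm_name] = True

    def least_recently_used(self):
        # The front of the ordered dict is the coldest VM.
        return next(iter(self._order), None)

tracker = LRUTracker()
for vm in ["vm1", "vm2", "vm3", "vm1"]:
    tracker.touch(vm)
print(tracker.least_recently_used())  # -> "vm2"
```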

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment (virtual machines) , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (virtual machines) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (first operating) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20060155912A1
CLAIM 6
. The server cluster of claim 1 , wherein each virtual node comprises : a first virtual machine that is licensed and is configured to emulated the hardware and software operating environment of the associated active node ;
and a second virtual machine that is unlicensed and is configured to emulated the hardware and software operating environment of the associated active node ;
wherein the first virtual machine is operable to host the applications of an associated primary node in the event of a failure in the associated primary node ;
wherein the second virtual machine is operable to become a licensed virtual machine in the event that the first virtual machine hosts the applications of an associated primary node ;
and wherein at least one of the plurality of active nodes runs a first operating (memory usage) system and wherein another of the plurality of active nodes

US20060155912A1
CLAIM 12
. A method for managing the operational status of the application of a server cluster having a plurality of active nodes and a physical standby node , comprising the steps of : establishing , within the standby node and for each active node , first and second standby virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) , wherein each virtual machine comprises an emulated representation of the operating environment of the corresponding active node ;
monitoring the utilization of each of the first standby virtual machines ;
migrating a first standby virtual machine to an active node if the operational status of the first standby virtual machine exceeds a threshold ;
configuring the second standby virtual machine corresponding to the migrated first standby virtual machine as a replacement for the first standby virtual machine ;
and creating a copy of the reconfigured second standby virtual machine as third standby virtual machine .
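
Claim 13 also recites a change region size determined from changed regions of a graphical display generated by the VMs. The sketch below computes such a metric by counting differing tiles between two frames; the frame representation and tile size are assumptions, not taken from the patent.

```python
# Illustrative "change region size" metric: compare two frames of a VM's
# graphical display and count the fixed-size tiles that differ.
def change_region_size(prev_frame, curr_frame, block=4):
    changed = 0
    for y in range(0, len(curr_frame), block):
        for x in range(0, len(curr_frame[0]), block):
            tile_prev = [row[x:x + block] for row in prev_frame[y:y + block]]
            tile_curr = [row[x:x + block] for row in curr_frame[y:y + block]]
            if tile_prev != tile_curr:
                changed += 1
    return changed

frame_a = [[0] * 8 for _ in range(8)]
frame_b = [row[:] for row in frame_a]
frame_b[0][0] = 1                           # one tile of the display changed
print(change_region_size(frame_a, frame_b))  # -> 1
```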

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using a low inter-reference recency set (LIRS) replacement scheme .
US20060155912A1
CLAIM 12
. A method for managing the operational status of the application of a server cluster having a plurality of active nodes and a physical standby node , comprising the steps of : establishing , within the standby node and for each active node , first and second standby virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) , wherein each virtual machine comprises an emulated representation of the operating environment of the corresponding active node ;
monitoring the utilization of each of the first standby virtual machines ;
migrating a first standby virtual machine to an active node if the operational status of the first standby virtual machine exceeds a threshold ;
configuring the second standby virtual machine corresponding to the migrated first standby virtual machine as a replacement for the first standby virtual machine ;
and creating a copy of the reconfigured second standby virtual machine as third standby virtual machine .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (first operating) tracking .
US20060155912A1
CLAIM 6
. The server cluster of claim 1 , wherein each virtual node comprises : a first virtual machine that is licensed and is configured to emulated the hardware and software operating environment of the associated active node ;
and a second virtual machine that is unlicensed and is configured to emulated the hardware and software operating environment of the associated active node ;
wherein the first virtual machine is operable to host the applications of an associated primary node in the event of a failure in the associated primary node ;
wherein the second virtual machine is operable to become a licensed virtual machine in the event that the first virtual machine hosts the applications of an associated primary node ;
and wherein at least one of the plurality of active nodes runs a first operating (memory usage) system and wherein another of the plurality of active nodes

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
US20060155912A1
CLAIM 12
. A method for managing the operational status of the application of a server cluster having a plurality of active nodes and a physical standby node , comprising the steps of : establishing , within the standby node and for each active node , first and second standby virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) , wherein each virtual machine comprises an emulated representation of the operating environment of the corresponding active node ;
monitoring the utilization of each of the first standby virtual machines ;
migrating a first standby virtual machine to an active node if the operational status of the first standby virtual machine exceeds a threshold ;
configuring the second standby virtual machine corresponding to the migrated first standby virtual machine as a replacement for the first standby virtual machine ;
and creating a copy of the reconfigured second standby virtual machine as third standby virtual machine .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using least recently used (LRU) replacement scheme .
US20060155912A1
CLAIM 12
. A method for managing the operational status of the application of a server cluster having a plurality of active nodes and a physical standby node , comprising the steps of : establishing , within the standby node and for each active node , first and second standby virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) , wherein each virtual machine comprises an emulated representation of the operating environment of the corresponding active node ;
monitoring the utilization of each of the first standby virtual machines ;
migrating a first standby virtual machine to an active node if the operational status of the first standby virtual machine exceeds a threshold ;
configuring the second standby virtual machine corresponding to the migrated first standby virtual machine as a replacement for the first standby virtual machine ;
and creating a copy of the reconfigured second standby virtual machine as third standby virtual machine .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20020002636A1

Filed: 2001-04-16     Issued: 2002-01-03

System and method for implementing application functionality within a network infrastructure

(Original Assignee) Circadence Corp     (Current Assignee) Sons Of Innovation LLC

Mark Vange, Marc Plumb, Michael Kouts, Glenn Wilson
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (selecting data) , memory usage , or input/output (I/O) access rates (selected set) for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20020002636A1
CLAIM 9
. A system for transporting data through a network comprising : a plurality of client applications generating requests for network services ;
a plurality of network servers configured to provide services in response to received requests ;
a front-end server within the network having a first interface configured to handle request/response traffic with the client applications ;
a back-end server within the network having a first interface configured to handle request/response traffic with a selected set (access rates) of network servers ;
a communication channel through the network between the front-end web server and the back-end web server .

US20020002636A1
CLAIM 28
. The transport mechanism of claim 19 wherein the shared bandwidth channel composes data packets by selecting data (processor usage) from the plurality of data transport links .
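
The independent claims of US9635134B2 determine a consumption rate by monitoring processor usage, memory usage, or I/O access rates per VM. A hedged sketch of that determining step is shown below, averaging periodic samples per VM; the sample source and field names are assumptions, and a real monitor would read hypervisor or guest counters rather than a static list.

```python
# Average periodic CPU/memory/I/O samples into per-VM consumption rates.
samples = [
    {"vm": "vm1", "cpu": 0.72, "mem": 0.55, "io": 0.10},
    {"vm": "vm1", "cpu": 0.80, "mem": 0.60, "io": 0.12},
    {"vm": "vm2", "cpu": 0.20, "mem": 0.25, "io": 0.05},
]

def consumption_rates(samples):
    totals, counts = {}, {}
    for s in samples:
        vm = s["vm"]
        totals.setdefault(vm, {"cpu": 0.0, "mem": 0.0, "io": 0.0})
        counts[vm] = counts.get(vm, 0) + 1
        for key in ("cpu", "mem", "io"):
            totals[vm][key] += s[key]
    return {vm: {k: v / counts[vm] for k, v in t.items()}
            for vm, t in totals.items()}

print(consumption_rates(samples))  # e.g., vm1 cpu averages to about 0.76
```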

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (selecting data) tracking .
US20020002636A1
CLAIM 28
. The transport mechanism of claim 19 wherein the shared bandwidth channel composes data packets by selecting data (processor usage) from the plurality of data transport links .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (network services) , or resources included in virtual private networks (VPNs) .
US20020002636A1
CLAIM 9
. A system for transporting data through a network comprising : a plurality of client applications generating requests for network services (Internet resources) ;
a plurality of network servers configured to provide services in response to received requests ;
a front-end server within the network having a first interface configured to handle request/response traffic with the client applications ;
a back-end server within the network having a first interface configured to handle request/response traffic with a selected set of network servers ;
a communication channel through the network between the front-end web server and the back-end web server .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (selecting data) , memory usage , or I/O access rates (selected set) for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20020002636A1
CLAIM 9
. A system for transporting data through a network comprising : a plurality of client applications generating requests for network services ;
a plurality of network servers configured to provide services in response to received requests ;
a front-end server within the network having a first interface configured to handle request/response traffic with the client applications ;
a back-end server within the network having a first interface configured to handle request/response traffic with a selected set (access rates) of network servers ;
a communication channel through the network between the front-end web server and the back-end web server .

US20020002636A1
CLAIM 28
. The transport mechanism of claim 19 wherein the shared bandwidth channel composes data packets by selecting data (processor usage) from the plurality of data transport links .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (selecting data) tracking .
US20020002636A1
CLAIM 28
. The transport mechanism of claim 19 wherein the shared bandwidth channel composes data packets by selecting data (processor usage) from the plurality of data transport links .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (network services) , or resources included in virtual private networks (VPNs) .
US20020002636A1
CLAIM 9
. A system for transporting data through a network comprising : a plurality of client applications generating requests for network services (Internet resources) ;
a plurality of network servers configured to provide services in response to received requests ;
a front-end server within the network having a first interface configured to handle request/response traffic with the client applications ;
a back-end server within the network having a first interface configured to handle request/response traffic with a selected set of network servers ;
a communication channel through the network between the front-end web server and the back-end web server .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (selecting data) , memory usage , I/O access rates (selected set) , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20020002636A1
CLAIM 9
. A system for transporting data through a network comprising : a plurality of client applications generating requests for network services ;
a plurality of network servers configured to provide services in response to received requests ;
a front-end server within the network having a first interface configured to handle request/response traffic with the client applications ;
a back-end server within the network having a first interface configured to handle request/response traffic with a selected set (access rates) of network servers ;
a communication channel through the network between the front-end web server and the back-end web server .

US20020002636A1
CLAIM 28
. The transport mechanism of claim 19 wherein the shared bandwidth channel composes data packets by selecting data (processor usage) from the plurality of data transport links .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (selecting data) tracking .
US20020002636A1
CLAIM 28
. The transport mechanism of claim 19 wherein the shared bandwidth channel composes data packets by selecting data (processor usage) from the plurality of data transport links .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (network services) , or resources included in virtual private networks (VPNs) .
US20020002636A1
CLAIM 9
. A system for transporting data through a network comprising : a plurality of client applications generating requests for network services (Internet resources) ;
a plurality of network servers configured to provide services in response to received requests ;
a front-end server within the network having a first interface configured to handle request/response traffic with the client applications ;
a back-end server within the network having a first interface configured to handle request/response traffic with a selected set of network servers ;
a communication channel through the network between the front-end web server and the back-end web server .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20020002611A1

Filed: 2001-04-16     Issued: 2002-01-03

System and method for shifting functionality between multiple web servers

(Original Assignee) Circadence Corp     (Current Assignee) Sons Of Innovation LLC

Mark Vange
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment (network resource) , comprising : determining a consumption rate of cloud resources (network resource) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates (selected set) for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20020002611A1
CLAIM 10
. A system for implementing a web site comprising : a first web server configured to provide a preselected set (access rates) of content and service applications in response to client requests ;
a second web server configured to provide a preselected set of content and service applications in response to requests from the first web server ;
a communication channel established between the first and second web servers , wherein the web site is implemented by delivering web pages from at least one of the first and second web servers by distributed and cooperative interaction using services and content provided by both first and second web servers .

US20020002611A1
CLAIM 16
. A system for supplying rendered information in a network environment comprising : providing a first server for accessing raw data from a data store ;
providing a second server configured to obtain the raw data from the first network resource (cloud resources, cloud computing environment, alternate cloud resources) ;
application software in the second server for transforming the raw data into a rendered format ;
and a client interface in the second server for communicating the rendered format from the second server to a client application .

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (network resource) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20020002611A1
CLAIM 16
. A system for supplying rendered information in a network environment comprising : providing a first server for accessing raw data from a data store ;
providing a second server configured to obtain the raw data from the first network resource (cloud resources, cloud computing environment, alternate cloud resources) ;
application software in the second server for transforming the raw data into a rendered format ;
and a client interface in the second server for communicating the rendered format from the second server to a client application .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (network resource) using the LIRS replacement scheme comprises using LIRS based processor usage tracking .
US20020002611A1
CLAIM 16
. A system for supplying rendered information in a network environment comprising : providing a first server for accessing raw data from a data store ;
providing a second server configured to obtain the raw data from the first network resource (cloud resources, cloud computing environment, alternate cloud resources) ;
application software in the second server for transforming the raw data into a rendered format ;
and a client interface in the second server for communicating the rendered format from the second server to a client application .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (network resource) using the LIRS replacement scheme comprises using LIRS based memory usage tracking .
US20020002611A1
CLAIM 16
. A system for supplying rendered information in a network environment comprising : providing a first server for accessing raw data from a data store ;
providing a second server configured to obtain the raw data from the first network resource (cloud resources, cloud computing environment, alternate cloud resources) ;
application software in the second server for transforming the raw data into a rendered format ;
and a client interface in the second server for communicating the rendered format from the second server to a client application .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (network resource) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (network services) , or resources included in virtual private networks (VPNs) .
US20020002611A1
CLAIM 15
. A method for delivering customized content from one or more network services (Internet resources) to a client computer comprising the acts of : providing a plurality of network servers each providing access to a set of raw data ;
requesting the content from the network servers ;
causing the network server to incorporate the raw data into a “usuable format” ;
and delivering the “usuable format” from the network server to a client computer .

US20020002611A1
CLAIM 16
. A system for supplying rendered information in a network environment comprising : providing a first server for accessing raw data from a data store ;
providing a second server configured to obtain the raw data from the first network resource (cloud resources, cloud computing environment, alternate cloud resources) ;
application software in the second server for transforming the raw data into a rendered format ;
and a client interface in the second server for communicating the rendered format from the second server to a client application .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (network resource) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
US20020002611A1
CLAIM 16
. A system for supplying rendered information in a network environment comprising : providing a first server for accessing raw data from a data store ;
providing a second server configured to obtain the raw data from the first network resource (cloud resources, cloud computing environment, alternate cloud resources) ;
application software in the second server for transforming the raw data into a rendered format ;
and a client interface in the second server for communicating the rendered format from the second server to a client application .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources (network resource) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates (selected set) for the one or more virtual machines in a cloud computing environment (network resource) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20020002611A1
CLAIM 10
. A system for implementing a web site comprising : a first web server configured to provide a preselected set (access rates) of content and service applications in response to client requests ;
a second web server configured to provide a preselected set of content and service applications in response to requests from the first web server ;
a communication channel established between the first and second web servers , wherein the web site is implemented by delivering web pages from at least one of the first and second web servers by distributed and cooperative interaction using services and content provided by both first and second web servers .

US20020002611A1
CLAIM 16
. A system for supplying rendered information in a network environment comprising : providing a first server for accessing raw data from a data store ;
providing a second server configured to obtain the raw data from the first network resource (cloud resources, cloud computing environment, alternate cloud resources) ;
application software in the second server for transforming the raw data into a rendered format ;
and a client interface in the second server for communicating the rendered format from the second server to a client application .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (network resource) using a low inter-reference recency set (LIRS) replacement scheme .
US20020002611A1
CLAIM 16
. A system for supplying rendered information in a network environment comprising : providing a first server for accessing raw data from a data store ;
providing a second server configured to obtain the raw data from the first network resource (cloud resources, cloud computing environment, alternate cloud resources) ;
application software in the second server for transforming the raw data into a rendered format ;
and a client interface in the second server for communicating the rendered format from the second server to a client application .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (network resource) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (network services) , or resources included in virtual private networks (VPNs) .
US20020002611A1
CLAIM 15
. A method for delivering customized content from one or more network services (Internet resources) to a client computer comprising the acts of : providing a plurality of network servers each providing access to a set of raw data ;
requesting the content from the network servers ;
causing the network server to incorporate the raw data into a “usuable format” ;
and delivering the “usuable format” from the network server to a client computer .

US20020002611A1
CLAIM 16
. A system for supplying rendered information in a network environment comprising : providing a first server for accessing raw data from a data store ;
providing a second server configured to obtain the raw data from the first network resource (cloud resources, cloud computing environment, alternate cloud resources) ;
application software in the second server for transforming the raw data into a rendered format ;
and a client interface in the second server for communicating the rendered format from the second server to a client application .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (network resource) using least recently used (LRU) replacement scheme .
US20020002611A1
CLAIM 16
. A system for supplying rendered information in a network environment comprising : providing a first server for accessing raw data from a data store ;
providing a second server configured to obtain the raw data from the first network resource (cloud resources, cloud computing environment, alternate cloud resources) ;
application software in the second server for transforming the raw data into a rendered format ;
and a client interface in the second server for communicating the rendered format from the second server to a client application .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment (network resource) , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (network resource) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates (selected set) , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20020002611A1
CLAIM 10
. A system for implementing a web site comprising : a first web server configured to provide a preselected set (access rates) of content and service applications in response to client requests ;
a second web server configured to provide a preselected set of content and service applications in response to requests from the first web server ;
a communication channel established between the first and second web servers , wherein the web site is implemented by delivering web pages from at least one of the first and second web servers by distributed and cooperative interaction using services and content provided by both first and second web servers .

US20020002611A1
CLAIM 16
. A system for supplying rendered information in a network environment comprising : providing a first server for accessing raw data from a data store ;
providing a second server configured to obtain the raw data from the first network resource (cloud resources, cloud computing environment, alternate cloud resources) ;
application software in the second server for transforming the raw data into a rendered format ;
and a client interface in the second server for communicating the rendered format from the second server to a client application .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (network resource) using a low inter-reference recency set (LIRS) replacement scheme .
US20020002611A1
CLAIM 16
. A system for supplying rendered information in a network environment comprising : providing a first server for accessing raw data from a data store ;
providing a second server configured to obtain the raw data from the first network resource (cloud resources, cloud computing environment, alternate cloud resources) ;
application software in the second server for transforming the raw data into a rendered format ;
and a client interface in the second server for communicating the rendered format from the second server to a client application .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (network resource) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (network services) , or resources included in virtual private networks (VPNs) .
US20020002611A1
CLAIM 15
. A method for delivering customized content from one or more network services (Internet resources) to a client computer comprising the acts of : providing a plurality of network servers each providing access to a set of raw data ;
requesting the content from the network servers ;
causing the network server to incorporate the raw data into a “usuable format” ;
and delivering the “usuable format” from the network server to a client computer .

US20020002611A1
CLAIM 16
. A system for supplying rendered information in a network environment comprising : providing a first server for accessing raw data from a data store ;
providing a second server configured to obtain the raw data from the first network resource (cloud resources, cloud computing environment, alternate cloud resources) ;
application software in the second server for transforming the raw data into a rendered format ;
and a client interface in the second server for communicating the rendered format from the second server to a client application .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (network resource) using least recently used (LRU) replacement scheme .
US20020002611A1
CLAIM 16
. A system for supplying rendered information in a network environment comprising : providing a first server for accessing raw data from a data store ;
providing a second server configured to obtain the raw data from the first network resource (cloud resources, cloud computing environment, alternate cloud resources) ;
application software in the second server for transforming the raw data into a rendered format ;
and a client interface in the second server for communicating the rendered format from the second server to a client application .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20020002622A1

Filed: 2001-04-16     Issued: 2002-01-03

Method and system for redirection to arbitrary front-ends in a communication system

(Original Assignee) Circadence Corp     (Current Assignee) Sons Of Innovation LLC

Mark Vange, Glenn Wilson, Michael Kouts, Marc Plumb, Alexandr Chekhovtsov
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment (network resources) , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource (first request) management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud (first channel) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20020002622A1
CLAIM 1
. A system for serving web pages to a requesting software application comprising : a web site ;
a plurality of front-end servers , wherein a unique network address is assigned to each front-end server ;
a first channel (alternate cloud, alternate cloud resources) configured to support request and response communication between the software application and the web site ;
a plurality of second channels configured to support communication between each of the front-end servers and the web site ;
and a redirector server operable to select one front-end server from the plurality of front-end servers and generate a response referring the requesting software application to the selected front-end server .

US20020002622A1
CLAIM 14
. A method for redirecting a communication between a software application and a network resource over a communication network , the method comprising : causing a software application to generate a first domain name service (DNS) request over a first channel within the communication network , the first request (first resource) specifying a domain name of the network resource ;
selecting a second channel within the communication network that supports communication with the network resource ;
responding to the DNS request with a network address of a front-end machine that supports the second channel ;
and conducting subsequent communications between the software application and the network resource over the second channel .

US20020002622A1
CLAIM 25
. A system for providing network resources (cloud computing environment) from an origin server to a client comprising : a set of intermediary servers topologically dispersed throughout a network ;
an enhanced communication channel between the set of intermediary servers and the origin server ;
and a redirector receiving address resolution requests for the origin server , selecting one of the intermediary servers in response to the request , and providing a network address of the selected intermediary servers to an entity generating the address resolution request .

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the first resource (first request) management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20020002622A1
CLAIM 14
. A method for redirecting a communication between a software application and a network resource over a communication network , the method comprising : causing a software application to generate a first domain name service (DNS) request over a first channel within the communication network , the first request (first resource) specifying a domain name of the network resource ;
selecting a second channel within the communication network that supports communication with the network resource ;
responding to the DNS request with a network address of a front-end machine that supports the second channel ;
and conducting subsequent communications between the software application and the network resource over the second channel .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud (first channel) resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US20020002622A1
CLAIM 1
. A system for serving web pages to a requesting software application comprising : a web site ;
a plurality of front-end servers , wherein a unique network address is assigned to each front-end server ;
a first channel (alternate cloud, alternate cloud resources) configured to support request and response communication between the software application and the web site ;
a plurality of second channels configured to support communication between each of the front-end servers and the web site ;
and a redirector server operable to select one front-end server from the plurality of front-end servers and generate a response referring the requesting software application to the selected front-end server .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment (network resources) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource (first request) management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud (first channel) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20020002622A1
CLAIM 1
. A system for serving web pages to a requesting software application comprising : a web site ;
a plurality of front-end servers , wherein a unique network address is assigned to each front-end server ;
a first channel (alternate cloud, alternate cloud resources) configured to support request and response communication between the software application and the web site ;
a plurality of second channels configured to support communication between each of the front-end servers and the web site ;
and a redirector server operable to select one front-end server from the plurality of front-end servers and generate a response referring the requesting software application to the selected front-end server .

US20020002622A1
CLAIM 14
. A method for redirecting a communication between a software application and a network resource over a communication network , the method comprising : causing a software application to generate a first domain name service (DNS) request over a first channel within the communication network , the first request (first resource) specifying a domain name of the network resource ;
selecting a second channel within the communication network that supports communication with the network resource ;
responding to the DNS request with a network address of a front-end machine that supports the second channel ;
and conducting subsequent communications between the software application and the network resource over the second channel .
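
As a reading aid, a minimal sketch of the redirection pattern recited in this reference claim: a DNS request naming the resource is answered with the network address of a front-end on a selected second channel, and later traffic is conducted over that channel. The channel table and the round-robin selection policy below are assumptions added for illustration.

```python
# Hypothetical channel table mapping a resource's domain name to front-end addresses.
CHANNELS = {
    "www.example.com": ["10.0.0.11", "10.0.0.12"],  # second channels / front-end machines
}

def select_second_channel(domain: str, request_count: int) -> str:
    # Select a front-end that supports communication with the named resource;
    # round-robin stands in for whatever selection policy a redirector might use.
    front_ends = CHANNELS[domain]
    return front_ends[request_count % len(front_ends)]

def answer_dns_request(domain: str, request_count: int) -> str:
    # Respond to the first-channel DNS request with the front-end's network address;
    # subsequent communications would then be conducted over that second channel.
    return select_second_channel(domain, request_count)

if __name__ == "__main__":
    for i in range(3):
        print(answer_dns_request("www.example.com", i))
```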

US20020002622A1
CLAIM 25
. A system for providing network resources (cloud computing environment) from an origin server to a client comprising : a set of intermediary servers topologically dispersed throughout a network ;
an enhanced communication channel between the set of intermediary servers and the origin server ;
and a redirector receiving address resolution requests for the origin server , selecting one of the intermediary servers in response to the request , and providing a network address of the selected intermediary servers to an entity generating the address resolution request .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud (first channel) resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US20020002622A1
CLAIM 1
. A system for serving web pages to a requesting software application comprising : a web site ;
a plurality of front-end servers , wherein a unique network address is assigned to each front-end server ;
a first channel (alternate cloud, alternate cloud resources) configured to support request and response communication between the software application and the web site ;
a plurality of second channels configured to support communication between each of the front-end servers and the web site ;
and a redirector server operable to select one front-end server from the plurality of front-end servers and generate a response referring the requesting software application to the selected front-end server .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment (network resources) , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource (first request) management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud (first channel) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20020002622A1
CLAIM 1
. A system for serving web pages to a requesting software application comprising : a web site ;
a plurality of front-end servers , wherein a unique network address is assigned to each front-end server ;
a first channel (alternate cloud, alternate cloud resources) configured to support request and response communication between the software application and the web site ;
a plurality of second channels configured to support communication between each of the front-end servers and the web site ;
and a redirector server operable to select one front-end server from the plurality of front-end servers and generate a response referring the requesting software application to the selected front-end server .

US20020002622A1
CLAIM 14
. A method for redirecting a communication between a software application and a network resource over a communication network , the method comprising : causing a software application to generate a first domain name service (DNS) request over a first channel within the communication network , the first request (first resource) specifying a domain name of the network resource ;
selecting a second channel within the communication network that supports communication with the network resource ;
responding to the DNS request with a network address of a front-end machine that supports the second channel ;
and conducting subsequent communications between the software application and the network resource over the second channel .

US20020002622A1
CLAIM 25
. A system for providing network resources (cloud computing environment) from an origin server to a client comprising : a set of intermediary servers topologically dispersed throughout a network ;
an enhanced communication channel between the set of intermediary servers and the origin server ;
and a redirector receiving address resolution requests for the origin server , selecting one of the intermediary servers in response to the request , and providing a network address of the selected intermediary servers to an entity generating the address resolution request .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud (first channel) resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US20020002622A1
CLAIM 1
. A system for serving web pages to a requesting software application comprising : a web site ;
a plurality of front-end servers , wherein a unique network address is assigned to each front-end server ;
a first channel (alternate cloud, alternate cloud resources) configured to support request and response communication between the software application and the web site ;
a plurality of second channels configured to support communication between each of the front-end servers and the web site ;
and a redirector server operable to select one front-end server from the plurality of front-end servers and generate a response referring the requesting software application to the selected front-end server .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20020004816A1

Filed: 2001-04-16     Issued: 2002-01-10

System and method for on-network storage services

(Original Assignee) Circadence Corp     (Current Assignee) Sons Of Innovation LLC

Mark Vange, Marco Clementoni, Angela Neill
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing (data storage device) environment , comprising : determining a consumption rate of cloud resources (storage system) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (storage area) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
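
The independent claims each recite, as one monitored quantity, a "change region size based on changed regions of a graphical display generated by the one or more VMs." A minimal sketch of one way such a metric could be computed from two successive framebuffer snapshots; the pixel grids and the bounding-box measure are illustrative assumptions, not a construction of the claim term.

```python
# Hypothetical framebuffers as 2D lists of pixel values; the metric is the area of the
# bounding box enclosing all pixels that changed between two snapshots.
def change_region_size(prev_frame, curr_frame):
    changed = [(r, c)
               for r, row in enumerate(curr_frame)
               for c, px in enumerate(row)
               if prev_frame[r][c] != px]
    if not changed:
        return 0
    rows = [r for r, _ in changed]
    cols = [c for _, c in changed]
    return (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)

if __name__ == "__main__":
    prev = [[0] * 4 for _ in range(4)]
    curr = [row[:] for row in prev]
    curr[1][1] = curr[2][3] = 255           # two changed pixels
    print(change_region_size(prev, curr))   # bounding box is 2 rows x 3 cols -> 6
```
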
US20020004816A1
CLAIM 1
. A data storage system (cloud resources) comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage device (cloud computing) s accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US20020004816A1
CLAIM 6
. The system of claim 1 wherein at least some of the storage device are configured as a storage area (memory usage) network .

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (storage system) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme (attached storage) .
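
Claim 2 recites a low inter-reference recency set (LIRS) replacement scheme. The sketch below illustrates only the inter-reference recency (IRR) ranking at the core of LIRS; a full LIRS implementation also maintains LIR/HIR sets and performs stack pruning, which is omitted here, and applying the ranking to VMs rather than cache blocks is an assumption drawn from the claim language.

```python
# Simplified illustration of the inter-reference recency (IRR) idea behind LIRS:
# items re-referenced after few intervening accesses get priority to stay resident;
# items with large (or still-undefined) IRR are the first candidates for demotion.
import math

def inter_reference_recency(trace):
    last_seen, irr = {}, {}
    for position, item in enumerate(trace):
        if item in last_seen:
            irr[item] = position - last_seen[item] - 1
        else:
            irr[item] = math.inf   # only seen once so far: treat IRR as infinite
        last_seen[item] = position
    return irr

def prioritize_by_lirs_like_rule(trace):
    # Lower IRR -> higher priority for continued consumption of resources.
    irr = inter_reference_recency(trace)
    return sorted(irr, key=lambda item: irr[item])

if __name__ == "__main__":
    access_trace = ["vm1", "vm2", "vm1", "vm3", "vm2", "vm1"]
    print(prioritize_by_lirs_like_rule(access_trace))  # ['vm1', 'vm2', 'vm3']
```
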
US20020004816A1
CLAIM 1
. A data storage system (cloud resources) comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage devices accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US20020004816A1
CLAIM 4
. The system of claim 1 wherein at least some of the storage devices comprise direct attached storage (replacement scheme) for the intermediary server .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (storage system) using the LIRS replacement scheme (attached storage) comprises using LIRS based processor usage tracking .
US20020004816A1
CLAIM 1
. A data storage system (cloud resources) comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage devices accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US20020004816A1
CLAIM 4
. The system of claim 1 wherein at least some of the storage devices comprise direct attached storage (replacement scheme) for the intermediary server .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (storage system) using the LIRS replacement scheme (attached storage) comprises using LIRS based memory usage (storage area) tracking .
US20020004816A1
CLAIM 1
. A data storage system (cloud resources) comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage devices accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US20020004816A1
CLAIM 4
. The system of claim 1 wherein at least some of the storage devices comprise direct attached storage (replacement scheme) for the intermediary server .

US20020004816A1
CLAIM 6
. The system of claim 1 wherein at least some of the storage device are configured as a storage area (memory usage) network .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (storage system) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (storage location) , Internet resources , or resources included in virtual private networks (VPNs) .
US20020004816A1
CLAIM 1
. A data storage system (cloud resources) comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage devices accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US20020004816A1
CLAIM 12
. A method for transferring data between network-connected computers comprising the acts of : storing a data object at a specific location in a network-connected storage mechanism ;
transmitting a token representing the data object from a first network-connected computer to an intermediary computer ;
in the intermediary computer , using the token to identify the specific storage location (hybrid cloud) of the data object ;
causing the storage mechanism to transfer the data object to a second network-connected computer .
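
A minimal sketch of the token-to-location steps recited in this reference claim: the intermediary resolves a received token to the object's specific storage location and directs the storage mechanism to deliver the object onward. The token registry and the transfer call are hypothetical stand-ins.

```python
# Hypothetical registry kept by the intermediary, mapping tokens to storage locations.
TOKEN_REGISTRY = {
    "tok-42": ("storage-node-3", "/objects/report.bin"),
}

def resolve_token(token: str):
    # Use the token to identify the specific storage location of the data object.
    return TOKEN_REGISTRY[token]

def transfer_to(destination: str, location):
    node, path = location
    # Stand-in for instructing the storage mechanism to send the object onward.
    print(f"ask {node} to send {path} to {destination}")

if __name__ == "__main__":
    transfer_to("client-host-7", resolve_token("tok-42"))
```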

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (storage system) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme (attached storage) .
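
Claim 6 recites a least recently used (LRU) replacement scheme as the second resource management scheme. A minimal LRU sketch using Python's collections.OrderedDict; treating evicted entries as migration candidates is an assumption drawn from the surrounding claim language.

```python
# Minimal LRU ordering: the least recently used entry is the first eviction/migration
# candidate once capacity is exceeded.
from collections import OrderedDict

class LruTracker:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()

    def touch(self, key: str):
        # Record an access; the most recently used entry moves to the end.
        self.entries.pop(key, None)
        self.entries[key] = True
        if len(self.entries) > self.capacity:
            victim, _ = self.entries.popitem(last=False)  # least recently used
            return victim                                  # candidate to migrate
        return None

if __name__ == "__main__":
    lru = LruTracker(capacity=2)
    for vm in ["vm1", "vm2", "vm1", "vm3"]:
        victim = lru.touch(vm)
        if victim:
            print("migrate", victim)   # prints: migrate vm2
```
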
US20020004816A1
CLAIM 1
. A data storage system (cloud resources) comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage devices accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US20020004816A1
CLAIM 4
. The system of claim 1 wherein at least some of the storage devices comprise direct attached storage (replacement scheme) for the intermediary server .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing (data storage device) resource manager to : determine a consumption rate of cloud resources (storage system) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (storage area) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20020004816A1
CLAIM 1
. A data storage system (cloud resources) comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage device (cloud computing) s accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US20020004816A1
CLAIM 6
. The system of claim 1 wherein at least some of the storage device are configured as a storage area (memory usage) network .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (data storage device) resource manager to prioritize the one or more VMs for consumption of the cloud resources (storage system) using a low inter-reference recency set (LIRS) replacement scheme (attached storage) .
US20020004816A1
CLAIM 1
. A data storage system (cloud resources) comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage device (cloud computing) s accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US20020004816A1
CLAIM 4
. The system of claim 1 wherein at least some of the storage devices comprise direct attached storage (replacement scheme) for the intermediary server .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (data storage device) resource manager to use LIRS based processor usage tracking .
US20020004816A1
CLAIM 1
. A data storage system comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage device (cloud computing) s accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (data storage device) resource manager to use LIRS based memory usage (storage area) tracking .
US20020004816A1
CLAIM 1
. A data storage system comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage device (cloud computing) s accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US20020004816A1
CLAIM 6
. The system of claim 1 wherein at least some of the storage device are configured as a storage area (memory usage) network .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (storage system) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (storage location) , Internet resources , or resources included in virtual private networks (VPNs) .
US20020004816A1
CLAIM 1
. A data storage system (cloud resources) comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage devices accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US20020004816A1
CLAIM 12
. A method for transferring data between network-connected computers comprising the acts of : storing a data object at a specific location in a network-connected storage mechanism ;
transmitting a token representing the data object from a first network-connected computer to an intermediary computer ;
in the intermediary computer , using the token to identify the specific storage location (hybrid cloud) of the data object ;
causing the storage mechanism to transfer the data object to a second network-connected computer .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (data storage device) resource manager to prioritize the one or more VMs for consumption of the cloud resources (storage system) using least recently used (LRU) replacement scheme (attached storage) .
US20020004816A1
CLAIM 1
. A data storage system (cloud resources) comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage device (cloud computing) s accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US20020004816A1
CLAIM 4
. The system of claim 1 wherein at least some of the storage devices comprise direct attached storage (replacement scheme) for the intermediary server .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing (data storage device) environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (storage system) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (storage area) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20020004816A1
CLAIM 1
. A data storage system (cloud resources) comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage device (cloud computing) s accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US20020004816A1
CLAIM 6
. The system of claim 1 wherein at least some of the storage device are configured as a storage area (memory usage) network .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (data storage device) resource manager to prioritize the one or more VMs for consumption of the cloud resources (storage system) using a low inter-reference recency set (LIRS) replacement scheme (attached storage) .
US20020004816A1
CLAIM 1
. A data storage system (cloud resources) comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage device (cloud computing) s accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US20020004816A1
CLAIM 4
. The system of claim 1 wherein at least some of the storage devices comprise direct attached storage (replacement scheme) for the intermediary server .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (data storage device) resource manager to use LIRS based processor usage tracking .
US20020004816A1
CLAIM 1
. A data storage system comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage device (cloud computing) s accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (data storage device) resource manager to use LIRS based memory usage (storage area) tracking .
US20020004816A1
CLAIM 1
. A data storage system comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage device (cloud computing) s accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US20020004816A1
CLAIM 6
. The system of claim 1 wherein at least some of the storage device are configured as a storage area (memory usage) network .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (storage system) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (storage location) , Internet resources , or resources included in virtual private networks (VPNs) .
US20020004816A1
CLAIM 1
. A data storage system (cloud resources) comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage devices accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US20020004816A1
CLAIM 12
. A method for transferring data between network-connected computers comprising the acts of : storing a data object at a specific location in a network-connected storage mechanism ;
transmitting a token representing the data object from a first network-connected computer to an intermediary computer ;
in the intermediary computer , using the token to identify the specific storage location (hybrid cloud) of the data object ;
causing the storage mechanism to transfer the data object to a second network-connected computer .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (data storage device) resource manager to prioritize the one or more VMs for consumption of the cloud resources (storage system) using least recently used (LRU) replacement scheme (attached storage) .
US20020004816A1
CLAIM 1
. A data storage system (cloud resources) comprising : a communication network ;
a client application coupled to the network and generating an access request for stored data , wherein the client application lacks a priori knowledge of the location of the requested data ;
an intermediary server coupled to the network to receive the request ;
one or more data storage device (cloud computing) s accessible through the intermediary server and having a plurality of data units stored at selected locations therein ;
a storage server having knowledge of the location of data units in the storage devices and having an interface for communicating with the intermediary servers ;
processes within the intermediary server responsive to a received data access request for communicating with the storage server to obtain knowledge about the location of requested data from the data in response to a received client request ;
and processes within the intermediary server for obtaining the data from the specific location and serving the data to the requesting client application .

US20020004816A1
CLAIM 4
. The system of claim 1 wherein at least some of the storage devices comprise direct attached storage (replacement scheme) for the intermediary server .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20020002602A1

Filed: 2001-04-16     Issued: 2002-01-03

System and method for serving a web site from multiple servers

(Original Assignee) Circadence Corp     (Current Assignee) Sons Of Innovation LLC

Mark Vange, Michael Rooks, Glenn Wilson, Michael Kouts
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud (first channel) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20020002602A1
CLAIM 9
. The web site of claim 8 further comprising means for directing request and response traffic from the first channel (alternate cloud, alternate cloud resources) to the second channel .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud (first channel) resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (other network) , Internet resources , or resources included in virtual private networks (VPNs) .
US20020002602A1
CLAIM 9
. The web site of claim 8 further comprising means for directing request and response traffic from the first channel (alternate cloud, alternate cloud resources) to the second channel .

US20020002602A1
CLAIM 34
. A method for serving web pages in response to requests from a network client , wherein the requests specify content desired by the user of the network client , the method comprising : providing a gateway server configured to receive requests specifying content ;
providing a plurality of network servers , at least one of the network servers housing the specified content , and at least one of the network servers housing alternative content ;
in response to a received request , generating requests in the gateway server to at least one of the network servers ;
in response to the request received from the gateway server , generating requests in the at least one network server to at least one other network (hybrid cloud) server ;
serving a response from the gateway server , wherein the response includes content selected from the group consisting of specified content from the gateway server , specified content from one of the network servers , specified content from an origin server , alternate content from the gateway server , and alternate content from one of the network servers .
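
For context, a minimal sketch of the fallback pattern recited in this reference claim: a gateway fans the request out to network servers (which may in turn query other network servers) and serves either the specified content or alternative content. The server names and content tables are hypothetical.

```python
# Hypothetical content tables; one network server holds the specified content,
# another holds alternative content that the gateway can serve as a fallback.
SPECIFIED = {"server-a": {"/page": "requested page body"}}
ALTERNATE = {"server-b": {"/page": "alternative page body"}}

def query_network_server(server: str, path: str):
    # A network server may in turn ask another network server before answering;
    # that second hop is collapsed into a single table lookup in this sketch.
    return SPECIFIED.get(server, {}).get(path)

def gateway_serve(path: str):
    for server in SPECIFIED:                    # request the specified content
        body = query_network_server(server, path)
        if body is not None:
            return body
    for server, table in ALTERNATE.items():     # fall back to alternate content
        if path in table:
            return table[path]
    return "404 not found"

if __name__ == "__main__":
    print(gateway_serve("/page"))
    print(gateway_serve("/missing"))
```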

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud (first channel) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20020002602A1
CLAIM 9
. The web site of claim 8 further comprising means for directing request and response traffic from the first channel (alternate cloud, alternate cloud resources) to the second channel .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud (first channel) resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (other network) , Internet resources , or resources included in virtual private networks (VPNs) .
US20020002602A1
CLAIM 9
. The web site of claim 8 further comprising means for directing request and response traffic from the first channel (alternate cloud, alternate cloud resources) to the second channel .

US20020002602A1
CLAIM 34
. A method for serving web pages in response to requests from a network client , wherein the requests specify content desired by the user of the network client , the method comprising : providing a gateway server configured to receive requests specifying content ;
providing a plurality of network servers , at least one of the network servers housing the specified content , and at least one of the network servers housing alternative content ;
in response to a received request , generating requests in the gateway server to at least one of the network servers ;
in response to the request received from the gateway server , generating requests in the at least one network server to at least one other network (hybrid cloud) server ;
serving a response from the gateway server , wherein the response includes content selected from the group consisting of specified content from the gateway server , specified content from one of the network servers , specified content from an origin server , alternate content from the gateway server , and alternate content from one of the network servers .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud (first channel) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20020002602A1
CLAIM 9
. The web site of claim 8 further comprising means for directing request and response traffic from the first channel (alternate cloud, alternate cloud resources) to the second channel .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud (first channel) resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (other network) , Internet resources , or resources included in virtual private networks (VPNs) .
US20020002602A1
CLAIM 9
. The web site of claim 8 further comprising means for directing request and response traffic from the first channel (alternate cloud, alternate cloud resources) to the second channel .

US20020002602A1
CLAIM 34
. A method for serving web pages in response to requests from a network client , wherein the requests specify content desired by the user of the network client , the method comprising : providing a gateway server configured to receive requests specifying content ;
providing a plurality of network servers , at least one of the network servers housing the specified content , and at least one of the network servers housing alternative content ;
in response to a received request , generating requests in the gateway server to at least one of the network servers ;
in response to the request received from the gateway server , generating requests in the at least one network server to at least one other network (hybrid cloud) server ;
serving a response from the gateway server , wherein the response includes content selected from the group consisting of specified content from the gateway server , specified content from one of the network servers , specified content from an origin server , alternate content from the gateway server , and alternate content from one of the network servers .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US5655120A

Filed: 1996-11-06     Issued: 1997-08-05

Method for load balancing in a multi-processor system where arising jobs are processed by a plurality of processors under real-time conditions

(Original Assignee) Siemens AG     (Current Assignee) Nokia Solutions and Networks GmbH and Co KG

Martin Witte, Joerg Oehlerich, Walter Held
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (multi-processor system) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme (said time) based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US5655120A
CLAIM 1
. A method for load balancing in a multi-processor system (memory usage) of a communication system , comprising the steps of : processing arising jobs by a plurality of processors under real-time conditions ;
calculating with each processor a load state thereof so as to evaluate an actual load state by direct recognition of a processing time being given to each processor in order to deal with tasks of the respective processor ;
informing each of all of the plurality of processors of the load states of all of the other processors within a time grid ;
dependent on an upward crossing of a specific value of a load state of a particular processor and dependent on the load states of the other processors , transferring from the particular processor at least a part of the jobs arising at it to the other processors ;
determining a value indicative of a number of the jobs to be distributed away to the other processors , and making a decision as to whether a specific pending job is to be distributed by forming a quotient of jobs previously distributed in a time interval with a plurality of all incoming jobs , comparing that quotient to said value indicative of the number of jobs to be distributed away , and when the quotient is greater than the value indicating the number of jobs to be distributed away , the specific pending job is distributed ;
distributing the transferred jobs onto the other processors in conformity with the load states of said other processors ;
and transferring from said particular processor only so many jobs until the load state of said particular processor again falls below said specific value .
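The quotient test recited in the reference claim above can be expressed compactly. The sketch below follows the quoted wording literally (distribute when the quotient exceeds the target value) and every identifier is hypothetical.

def should_distribute(jobs_distributed_in_interval: int,
                      total_incoming_jobs: int,
                      distribution_value: float) -> bool:
    # Quotient of jobs previously distributed in the time interval to all
    # incoming jobs, compared against the value indicating how many jobs
    # are to be distributed away (expressed here as a fraction of the load).
    if total_incoming_jobs == 0:
        return False
    quotient = jobs_distributed_in_interval / total_incoming_jobs
    return quotient > distribution_value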

US5655120A
CLAIM 6
. A method according to claim 5 including the step of storing in the particular processor in tables corresponding to said time (second resource management scheme) grid the load states received from the other processors .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (multi-processor system) tracking .
US5655120A
CLAIM 1
. A method for load balancing in a multi-processor system (memory usage) of a communication system , comprising the steps of : processing arising jobs by a plurality of processors under real-time conditions ;
calculating with each processor a load state thereof so as to evaluate an actual load state by direct recognition of a processing time being given to each processor in order to deal with tasks of the respective processor ;
informing each of all of the plurality of processors of the load states of all of the other processors within a time grid ;
dependent on an upward crossing of a specific value of a load state of a particular processor and dependent on the load states of the other processors , transferring from the particular processor at least a part of the jobs arising at it to the other processors ;
determining a value indicative of a number of the jobs to be distributed away to the other processors , and making a decision as to whether a specific pending job is to be distributed by forming a quotient of jobs previously distributed in a time interval with a plurality of all incoming jobs , comparing that quotient to said value indicative of the number of jobs to be distributed away , and when the quotient is greater than the value indicating the number of jobs to be distributed away , the specific pending job is distributed ;
distributing the transferred jobs onto the other processors in conformity with the load states of said other processors ;
and transferring from said particular processor only so many jobs until the load state of said particular processor again falls below said specific value .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the second resource management scheme (said time) comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
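A least recently used ordering of the kind recited in claim 6 can be kept with a handful of lines; the OrderedDict bookkeeping below is an assumed illustration, not the patent's disclosure.

from collections import OrderedDict

class LruVmPriority:
    def __init__(self) -> None:
        self._order = OrderedDict()  # least recently used VM first

    def touch(self, vm_id: str) -> None:
        # Record that vm_id just consumed cloud resources (moves it to the tail).
        self._order.pop(vm_id, None)
        self._order[vm_id] = True

    def lowest_priority(self) -> str:
        # The least recently used VM is the first candidate to lose resources
        # or to have its consumption migrated under the second scheme.
        return next(iter(self._order))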
US5655120A
CLAIM 6
. A method according to claim 5 including the step of storing in the particular processor in tables corresponding to said time (second resource management scheme) grid the load states received from the other processors .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (multi-processor system) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme (said time) based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US5655120A
CLAIM 1
. A method for load balancing in a multi-processor system (memory usage) of a communication system , comprising the steps of : processing arising jobs by a plurality of processors under real-time conditions ;
calculating with each processor a load state thereof so as to evaluate an actual load state by direct recognition of a processing time being given to each processor in order to deal with tasks of the respective processor ;
informing each of all of the plurality of processors of the load states of all of the other processors within a time grid ;
dependent on an upward crossing of a specific value of a load state of a particular processor and dependent on the load states of the other processors , transferring from the particular processor at least a part of the jobs arising at it to the other processors ;
determining a value indicative of a number of the jobs to be distributed away to the other processors , and making a decision as to whether a specific pending job is to be distributed by forming a quotient of jobs previously distributed in a time interval with a plurality of all incoming jobs , comparing that quotient to said value indicative of the number of jobs to be distributed away , and when the quotient is greater than the value indicating the number of jobs to be distributed away , the specific pending job is distributed ;
distributing the transferred jobs onto the other processors in conformity with the load states of said other processors ;
and transferring from said particular processor only so many jobs until the load state of said particular processor again falls below said specific value .

US5655120A
CLAIM 6
. A method according to claim 5 including the step of storing in the particular processor in tables corresponding to said time (second resource management scheme) grid the load states received from the other processors .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (multi-processor system) tracking .
US5655120A
CLAIM 1
. A method for load balancing in a multi-processor system (memory usage) of a communication system , comprising the steps of : processing arising jobs by a plurality of processors under real-time conditions ;
calculating with each processor a load state thereof so as to evaluate an actual load state by direct recognition of a processing time being given to each processor in order to deal with tasks of the respective processor ;
informing each of all of the plurality of processors of the load states of all of the other processors within a time grid ;
dependent on an upward crossing of a specific value of a load state of a particular processor and dependent on the load states of the other processors , transferring from the particular processor at least a part of the jobs arising at it to the other processors ;
determining a value indicative of a number of the jobs to be distributed away to the other processors , and making a decision as to whether a specific pending job is to be distributed by forming a quotient of jobs previously distributed in a time interval with a plurality of all incoming jobs , comparing that quotient to said value indicative of the number of jobs to be distributed away , and when the quotient is greater than the value indicating the number of jobs to be distributed away , the specific pending job is distributed ;
distributing the transferred jobs onto the other processors in conformity with the load states of said other processors ;
and transferring from said particular processor only so many jobs until the load state of said particular processor again falls below said specific value .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (specific value) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (multi-processor system) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme (said time) based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US5655120A
CLAIM 1
. A method for load balancing in a multi-processor system (memory usage) of a communication system , comprising the steps of : processing arising jobs by a plurality of processors under real-time conditions ;
calculating with each processor a load state thereof so as to evaluate an actual load state by direct recognition of a processing time being given to each processor in order to deal with tasks of the respective processor ;
informing each of all of the plurality of processors of the load states of all of the other processors within a time grid ;
dependent on an upward crossing of a specific value (I/O access rate) of a load state of a particular processor and dependent on the load states of the other processors , transferring from the particular processor at least a part of the jobs arising at it to the other processors ;
determining a value indicative of a number of the jobs to be distributed away to the other processors , and making a decision as to whether a specific pending job is to be distributed by forming a quotient of jobs previously distributed in a time interval with a plurality of all incoming jobs , comparing that quotient to said value indicative of the number of jobs to be distributed away , and when the quotient is greater than the value indicating the number of jobs to be distributed away , the specific pending job is distributed ;
distributing the transferred jobs onto the other processors in conformity with the load states of said other processors ;
and transferring from said particular processor only so many jobs until the load state of said particular processor again falls below said specific value .

US5655120A
CLAIM 6
. A method according to claim 5 including the step of storing in the particular processor in tables corresponding to said time (second resource management scheme) grid the load states received from the other processors .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (multi-processor system) tracking .
US5655120A
CLAIM 1
. A method for load balancing in a multi-processor system (memory usage) of a communication system , comprising the steps of : processing arising jobs by a plurality of processors under real-time conditions ;
calculating with each processor a load state thereof so as to evaluate an actual load state by direct recognition of a processing time being given to each processor in order to deal with tasks of the respective processor ;
informing each of all of the plurality of processors of the load states of all of the other processors within a time grid ;
dependent on an upward crossing of a specific value of a load state of a particular processor and dependent on the load states of the other processors , transferring from the particular processor at least a part of the jobs arising at it to the other processors ;
determining a value indicative of a number of the jobs to be distributed away to the other processors , and making a decision as to whether a specific pending job is to be distributed by forming a quotient of jobs previously distributed in a time interval with a plurality of all incoming jobs , comparing that quotient to said value indicative of the number of jobs to be distributed away , and when the quotient is greater than the value indicating the number of jobs to be distributed away , the specific pending job is distributed ;
distributing the transferred jobs onto the other processors in conformity with the load states of said other processors ;
and transferring from said particular processor only so many jobs until the load state of said particular processor again falls below said specific value .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20120096460A1

Filed: 2011-09-12     Issued: 2012-04-19

Apparatus and method for controlling live-migrations of a plurality of virtual machines

(Original Assignee) Fujitsu Ltd     (Current Assignee) Fujitsu Ltd

Atsuji Sekiguchi, Masazumi Matsubara, Yuji Wada, Yasuhide Matsumoto
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource (first resource) management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource (second resource) management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20120096460A1
CLAIM 1
. An apparatus for controlling live-migrations of a plurality of virtual machines , the apparatus comprising : a memory for storing resource-usage state information in association with each of the plurality of virtual machines , the resource-usage state information indicating a change in an amount of resources being used for providing a service ;
and a processor to : acquire , from each of the plurality of virtual machines , the resource-usage state information when a first live migration of a first virtual machine is started , store , in the memory , the acquired resource-usage state information in association with the each of the plurality of virtual machines , calculate a correlation factor indicating a degree of correlation between first resource (first resource) -usage state information for the first virtual machine and second resource (second resource) -usage state information for each of one or more target virtual machines other than the first virtual machine , using the acquired resource-usage state information for the each of the plurality of virtual machines , and execute a second live-migration on a second virtual machine that is selected from the one or more target virtual machines and has a positive correlation factor with respect to the first virtual machine , while executing the first live migration on the first virtual machine , wherein the positive correlation factor indicates a close similarity between the first resource-usage state information and the second resource-usage state information .
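The "correlation factor" limitation of the reference can be illustrated with an ordinary Pearson coefficient over the resource-usage traces; the choice of Pearson and the positive-cutoff selection below are assumptions made for illustration only.

from statistics import mean, pstdev
from typing import Dict, List, Optional

def correlation(a: List[float], b: List[float]) -> float:
    # Pearson correlation of two equal-length resource-usage traces.
    ma, mb = mean(a), mean(b)
    sa, sb = pstdev(a), pstdev(b)
    if sa == 0.0 or sb == 0.0:
        return 0.0
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    return cov / (sa * sb)

def pick_second_migration(first_vm_usage: List[float],
                          candidates: Dict[str, List[float]]) -> Optional[str]:
    # Choose a target VM whose usage trace is positively correlated with the
    # VM whose live migration is already in progress.
    best_id, best_corr = None, 0.0
    for vm_id, usage in candidates.items():
        c = correlation(first_vm_usage, usage)
        if c > best_corr:
            best_id, best_corr = vm_id, c
    return best_id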

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the first resource (first resource) management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20120096460A1
CLAIM 1
. An apparatus for controlling live-migrations of a plurality of virtual machines , the apparatus comprising : a memory for storing resource-usage state information in association with each of the plurality of virtual machines , the resource-usage state information indicating a change in an amount of resources being used for providing a service ;
and a processor to : acquire , from each of the plurality of virtual machines , the resource-usage state information when a first live migration of a first virtual machine is started , store , in the memory , the acquired resource-usage state information in association with the each of the plurality of virtual machines , calculate a correlation factor indicating a degree of correlation between first resource (first resource) -usage state information for the first virtual machine and second resource-usage state information for each of one or more target virtual machines other than the first virtual machine , using the acquired resource-usage state information for the each of the plurality of virtual machines , and execute a second live-migration on a second virtual machine that is selected from the one or more target virtual machines and has a positive correlation factor with respect to the first virtual machine , while executing the first live migration on the first virtual machine , wherein the positive correlation factor indicates a close similarity between the first resource-usage state information and the second resource-usage state information .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the second resource (second resource) management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
US20120096460A1
CLAIM 1
. An apparatus for controlling live-migrations of a plurality of virtual machines , the apparatus comprising : a memory for storing resource-usage state information in association with each of the plurality of virtual machines , the resource-usage state information indicating a change in an amount of resources being used for providing a service ;
and a processor to : acquire , from each of the plurality of virtual machines , the resource-usage state information when a first live migration of a first virtual machine is started , store , in the memory , the acquired resource-usage state information in association with the each of the plurality of virtual machines , calculate a correlation factor indicating a degree of correlation between first resource-usage state information for the first virtual machine and second resource (second resource) -usage state information for each of one or more target virtual machines other than the first virtual machine , using the acquired resource-usage state information for the each of the plurality of virtual machines , and execute a second live-migration on a second virtual machine that is selected from the one or more target virtual machines and has a positive correlation factor with respect to the first virtual machine , while executing the first live migration on the first virtual machine , wherein the positive correlation factor indicates a close similarity between the first resource-usage state information and the second resource-usage state information .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource (first resource) management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource (second resource) management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20120096460A1
CLAIM 1
. An apparatus for controlling live-migrations of a plurality of virtual machines , the apparatus comprising : a memory for storing resource-usage state information in association with each of the plurality of virtual machines , the resource-usage state information indicating a change in an amount of resources being used for providing a service ;
and a processor to : acquire , from each of the plurality of virtual machines , the resource-usage state information when a first live migration of a first virtual machine is started , store , in the memory , the acquired resource-usage state information in association with the each of the plurality of virtual machines , calculate a correlation factor indicating a degree of correlation between first resource (first resource) -usage state information for the first virtual machine and second resource (second resource) -usage state information for each of one or more target virtual machines other than the first virtual machine , using the acquired resource-usage state information for the each of the plurality of virtual machines , and execute a second live-migration on a second virtual machine that is selected from the one or more target virtual machines and has a positive correlation factor with respect to the first virtual machine , while executing the first live migration on the first virtual machine , wherein the positive correlation factor indicates a close similarity between the first resource-usage state information and the second resource-usage state information .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource (first resource) management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource (second resource) management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20120096460A1
CLAIM 1
. An apparatus for controlling live-migrations of a plurality of virtual machines , the apparatus comprising : a memory for storing resource-usage state information in association with each of the plurality of virtual machines , the resource-usage state information indicating a change in an amount of resources being used for providing a service ;
and a processor to : acquire , from each of the plurality of virtual machines , the resource-usage state information when a first live migration of a first virtual machine is started , store , in the memory , the acquired resource-usage state information in association with the each of the plurality of virtual machines , calculate a correlation factor indicating a degree of correlation between first resource (first resource) -usage state information for the first virtual machine and second resource (second resource) -usage state information for each of one or more target virtual machines other than the first virtual machine , using the acquired resource-usage state information for the each of the plurality of virtual machines , and execute a second live-migration on a second virtual machine that is selected from the one or more target virtual machines and has a positive correlation factor with respect to the first virtual machine , while executing the first live migration on the first virtual machine , wherein the positive correlation factor indicates a close similarity between the first resource-usage state information and the second resource-usage state information .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
WO2011031459A2

Filed: 2010-08-24     Issued: 2011-03-17

A method and apparatus for data center automation

(Original Assignee) Ntt Docomo, Inc.     

Ulas C. Kozat, Rahul Urgaonkar
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (processing speed) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources (physical server) located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2011031459A2
CLAIM 1
. A virtualized data center architecture comprising : a buffer to receive a plurality of requests from a plurality of applications ;
a plurality of physical servers (alternate cloud resources, community cloud, alternate cloud resources include one) , wherein each server of the plurality of servers comprises one or more server resources allocable to one or more virtual machines on said each server , wherein each virtual machine handles requests for a different one of a plurality of applications , and local resource managers each running on said each server to generate resource allocation decisions to allocate the one or more resources to the one or more virtual machines running on said each server ;
a router communicably coupled to the plurality of servers to control routing of each of the plurality of requests to an individual server in the plurality of servers ;
an admission controller to determine whether to admit the plurality of requests into the buffer , a central resource manager to determine which server of the plurality of servers are active , wherein decisions of the central resource manager depends on backlog information per application at each of the plurality of servers and the router , and further wherein decisions regarding admission control made by the admission controller , decisions made regarding resource allocation made locally by each local resource manager in each of the plurality of servers , and decisions regarding routing of requests for an application between multiple servers by the router are decoupled from each other .

WO2011031459A2
CLAIM 3
. A virtualized data center architecture comprising : a buffer to receive a plurality of requests from a plurality of applications ;
a plurality of servers , wherein each server of the plurality of servers comprises one or more server resources allocable to one or more virtual machines on said each server , wherein each virtual machine handles requests for a different one of a plurality of applications , and a local resource manager to generate resource allocation decisions to allocate the one or more resources to the one or more virtual machines , wherein the local resource manager chooses a resource allocation based on maximizing a sum of a product of backlogs of each application of the plurality of applications on the server and processing speed (maximum capacity) of a queue storing the backlog of the application on the server less a sum of products of the system parameter , the application priority and a power expenditure associated with the application ;
a router communicably coupled to the plurality of servers to control routing of each of the plurality of requests to an individual server in the plurality of servers ;
an admission controller to determine whether to admit the plurality of requests into the data center .
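Claim 3's local objective (maximize the backlog-weighted processing speed, less a parameterized priority and power penalty) can be scored in a few lines; the candidate-enumeration interface and the symbol v for the system parameter are assumptions.

from typing import Dict, Iterable, Optional, Tuple

def allocation_score(backlog: Dict[str, float], speed: Dict[str, float],
                     priority: Dict[str, float], power: Dict[str, float],
                     v: float) -> float:
    # sum_a backlog(a) * speed(a)  -  v * sum_a priority(a) * power(a)
    gain = sum(backlog[a] * speed[a] for a in backlog)
    cost = v * sum(priority[a] * power[a] for a in backlog)
    return gain - cost

def choose_allocation(candidates: Iterable[Tuple[Dict[str, float], Dict[str, float]]],
                      backlog: Dict[str, float], priority: Dict[str, float],
                      v: float) -> Optional[Tuple[Dict[str, float], Dict[str, float]]]:
    # Each candidate is a (speed, power) pair induced by one possible allocation;
    # the highest-scoring candidate is selected.
    best, best_score = None, float("-inf")
    for speed, power in candidates:
        score = allocation_score(backlog, speed, priority, power, v)
        if score > best_score:
            best, best_score = (speed, power), score
    return best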

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (physical server) include one or more of resources included in public cloud , resources included in community cloud (physical server) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
WO2011031459A2
CLAIM 1
. A virtualized data center architecture comprising : a buffer to receive a plurality of requests from a plurality of applications ;
a plurality of physical servers (alternate cloud resources, community cloud, alternate cloud resources include one) , wherein each server of the plurality of servers comprises one or more server resources allocable to one or more virtual machines on said each server , wherein each virtual machine handles requests for a different one of a plurality of applications , and local resource managers each running on said each server to generate resource allocation decisions to allocate the one or more resources to the one or more virtual machines running on said each server ;
a router communicably coupled to the plurality of servers to control routing of each of the plurality of requests to an individual server in the plurality of servers ;
an admission controller to determine whether to admit the plurality of requests into the buffer , a central resource manager to determine which server of the plurality of servers are active , wherein decisions of the central resource manager depends on backlog information per application at each of the plurality of servers and the router , and further wherein decisions regarding admission control made by the admission controller , decisions made regarding resource allocation made locally by each local resource manager in each of the plurality of servers , and decisions regarding routing of requests for an application between multiple servers by the router are decoupled from each other .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (processing speed) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources (physical server) located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2011031459A2
CLAIM 1
. A virtualized data center architecture comprising : a buffer to receive a plurality of requests from a plurality of applications ;
a plurality of physical servers (alternate cloud resources, community cloud, alternate cloud resources include one) , wherein each server of the plurality of servers comprises one or more server resources allocable to one or more virtual machines on said each server , wherein each virtual machine handles requests for a different one of a plurality of applications , and local resource managers each running on said each server to generate resource allocation decisions to allocate the one or more resources to the one or more virtual machines running on said each server ;
a router communicably coupled to the plurality of servers to control routing of each of the plurality of requests to an individual server in the plurality of servers ;
an admission controller to determine whether to admit the plurality of requests into the buffer , a central resource manager to determine which server of the plurality of servers are active , wherein decisions of the central resource manager depends on backlog information per application at each of the plurality of servers and the router , and further wherein decisions regarding admission control made by the admission controller , decisions made regarding resource allocation made locally by each local resource manager in each of the plurality of servers , and decisions regarding routing of requests for an application between multiple servers by the router are decoupled from each other .

WO2011031459A2
CLAIM 3
. A virtualized data center architecture comprising : a buffer to receive a plurality of requests from a plurality of applications ;
a plurality of servers , wherein each server of the plurality of servers comprises one or more server resources allocable to one or more virtual machines on said each server , wherein each virtual machine handles requests for a different one of a plurality of applications , and a local resource manager to generate resource allocation decisions to allocate the one or more resources to the one or more virtual machines , wherein the local resource manager chooses a resource allocation based on maximizing a sum of a product of backlogs of each application of the plurality of applications on the server and processing speed (maximum capacity) of a queue storing the backlog of the application on the server less a sum of products of the system parameter , the application priority and a power expenditure associated with the application ;
a router communicably coupled to the plurality of servers to control routing of each of the plurality of requests to an individual server in the plurality of servers ;
an admission controller to determine whether to admit the plurality of requests into the data center .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (physical server) include one or more of resources included in public cloud , resources included in community cloud (physical server) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
WO2011031459A2
CLAIM 1
. A virtualized data center architecture comprising : a buffer to receive a plurality of requests from a plurality of applications ;
a plurality of physical servers (alternate cloud resources, community cloud, alternate cloud resources include one) , wherein each server of the plurality of servers comprises one or more server resources allocable to one or more virtual machines on said each server , wherein each virtual machine handles requests for a different one of a plurality of applications , and local resource managers each running on said each server to generate resource allocation decisions to allocate the one or more resources to the one or more virtual machines running on said each server ;
a router communicably coupled to the plurality of servers to control routing of each of the plurality of requests to an individual server in the plurality of servers ;
an admission controller to determine whether to admit the plurality of requests into the buffer , a central resource manager to determine which server of the plurality of servers are active , wherein decisions of the central resource manager depends on backlog information per application at each of the plurality of servers and the router , and further wherein decisions regarding admission control made by the admission controller , decisions made regarding resource allocation made locally by each local resource manager in each of the plurality of servers , and decisions regarding routing of requests for an application between multiple servers by the router are decoupled from each other .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (processing speed) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources (physical server) located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2011031459A2
CLAIM 1
. A virtualized data center architecture comprising : a buffer to receive a plurality of requests from a plurality of applications ;
a plurality of physical servers (alternate cloud resources, community cloud, alternate cloud resources include one) , wherein each server of the plurality of servers comprises one or more server resources allocable to one or more virtual machines on said each server , wherein each virtual machine handles requests for a different one of a plurality of applications , and local resource managers each running on said each server to generate resource allocation decisions to allocate the one or more resources to the one or more virtual machines running on said each server ;
a router communicably coupled to the plurality of servers to control routing of each of the plurality of requests to an individual server in the plurality of servers ;
an admission controller to determine whether to admit the plurality of requests into the buffer , a central resource manager to determine which server of the plurality of servers are active , wherein decisions of the central resource manager depends on backlog information per application at each of the plurality of servers and the router , and further wherein decisions regarding admission control made by the admission controller , decisions made regarding resource allocation made locally by each local resource manager in each of the plurality of servers , and decisions regarding routing of requests for an application between multiple servers by the router are decoupled from each other .

WO2011031459A2
CLAIM 3
. A virtualized data center architecture comprising : a buffer to receive a plurality of requests from a plurality of applications ;
a plurality of servers , wherein each server of the plurality of servers comprises one or more server resources allocable to one or more virtual machines on said each server , wherein each virtual machine handles requests for a different one of a plurality of applications , and a local resource manager to generate resource allocation decisions to allocate the one or more resources to the one or more virtual machines , wherein the local resource manager chooses a resource allocation based on maximizing a sum of a product of backlogs of each application of the plurality of applications on the server and processing speed (maximum capacity) of a queue storing the backlog of the application on the server less a sum of products of the system parameter , the application priority and a power expenditure associated with the application ;
a router communicably coupled to the plurality of servers to control routing of each of the plurality of requests to an individual server in the plurality of servers ;
an admission controller to determine whether to admit the plurality of requests into the data center .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (physical server) include one or more of resources included in public cloud , resources included in community cloud (physical server) , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
WO2011031459A2
CLAIM 1
. A virtualized data center architecture comprising : a buffer to receive a plurality of requests from a plurality of applications ;
a plurality of physical servers (alternate cloud resources, community cloud, alternate cloud resources include one) , wherein each server of the plurality of servers comprises one or more server resources allocable to one or more virtual machines on said each server , wherein each virtual machine handles requests for a different one of a plurality of applications , and local resource managers each running on said each server to generate resource allocation decisions to allocate the one or more resources to the one or more virtual machines running on said each server ;
a router communicably coupled to the plurality of servers to control routing of each of the plurality of requests to an individual server in the plurality of servers ;
an admission controller to determine whether to admit the plurality of requests into the buffer , a central resource manager to determine which server of the plurality of servers are active , wherein decisions of the central resource manager depends on backlog information per application at each of the plurality of servers and the router , and further wherein decisions regarding admission control made by the admission controller , decisions made regarding resource allocation made locally by each local resource manager in each of the plurality of servers , and decisions regarding routing of requests for an application between multiple servers by the router are decoupled from each other .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20100235845A1

Filed: 2010-05-24     Issued: 2010-09-16

Sub-task processor distribution scheduling

(Original Assignee) Sony Interactive Entertainment Inc     (Current Assignee) Sony Interactive Entertainment Inc

John P. Bates, Payton R. White
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (total size) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20100235845A1
CLAIM 8
. The method of claim 7 wherein the EET is calculated by a formula of the type : EET ≈ f(x , TS , CS) / BW_o + ET / x + RS / BW_i + RTT , where ET represents an execution time of all sub-tasks on one node , TS represents a total size (maximum capacity) of data which is divided among the sub-tasks , CS represents a constant sized data needed by each sub-task , RS represents a total size of output data produced by the tasks , BW_o and BW_i respectively represent outgoing and incoming bandwidths for all nodes , and RTT represents a round-trip message time from the local node to a distributed node .
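Read with the data-size terms divided by the corresponding bandwidths (the reading used in the reconstruction above, which is an editorial assumption), the estimate is a one-liner:

def estimated_execution_time(f_xtscs: float, et: float, rs: float,
                             x: int, bw_o: float, bw_i: float, rtt: float) -> float:
    # f_xtscs stands for f(x, TS, CS): the outgoing data volume shipped to the
    # x nodes; rs is the total size of the output data returned to the local node.
    return f_xtscs / bw_o + et / x + rs / bw_i + rtt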

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (more process) , or resources included in virtual private networks (VPNs) .
US20100235845A1
CLAIM 1
. A method for processing of processor executable tasks , comprising : generating a task with a local node , wherein the local node includes one or more processor units (Internet resources) operably coupled to one or more distributed nodes ;
dividing the task into one or more sub-tasks ;
determining an optimum number of nodes x on which to process the one or more sub-tasks , wherein x is based at least partly on parameters relating to processing the sub-tasks at nodes accessible by the local node ;
and if x is greater than one , based on the value of x , making a determination as to whether to (1) execute the task at the local node with the processor unit , (2) , distribute the task among two or more local node processors , (3) distribute the task to one or more of the distributed nodes accessible to the local node over a LAN , or (4) distribute the task to one or more of the distributed nodes that are accessible to the local node over a WAN ;
and implementing (1) , (2) , (3) , or (4) with the local node according to the determination .
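The four-way determination in claim 1 of the reference amounts to a tiered placement decision; the thresholds below (local processor count, LAN node count) are illustrative assumptions only.

from enum import Enum

class Placement(Enum):
    LOCAL_SINGLE = 1   # execute the task at the local node
    LOCAL_MULTI = 2    # distribute among two or more local node processors
    LAN_NODES = 3      # distribute to nodes reachable over a LAN
    WAN_NODES = 4      # distribute to nodes reachable over a WAN

def place_task(x: int, local_processors: int, lan_nodes: int) -> Placement:
    # x is the optimum number of nodes on which to process the sub-tasks.
    if x <= 1:
        return Placement.LOCAL_SINGLE
    if x <= local_processors:
        return Placement.LOCAL_MULTI
    if x <= local_processors + lan_nodes:
        return Placement.LAN_NODES
    return Placement.WAN_NODES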

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (total size) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20100235845A1
CLAIM 8
. The method of claim 7 wherein the EET is calculated by a formula of the type : EET ≈ f(x , TS , CS) / BW_o + ET / x + RS / BW_i + RTT , where ET represents an execution time of all sub-tasks on one node , TS represents a total size (maximum capacity) of data which is divided among the sub-tasks , CS represents a constant sized data needed by each sub-task , RS represents a total size of output data produced by the tasks , BW_o and BW_i respectively represent outgoing and incoming bandwidths for all nodes , and RTT represents a round-trip message time from the local node to a distributed node .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (more process) , or resources included in virtual private networks (VPNs) .
US20100235845A1
CLAIM 1
. A method for processing of processor executable tasks , comprising : generating a task with a local node , wherein the local node includes one or more processor units (Internet resources) operably coupled to one or more distributed nodes ;
dividing the task into one or more sub-tasks ;
determining an optimum number of nodes x on which to process the one or more sub-tasks , wherein x is based at least partly on parameters relating to processing the sub-tasks at nodes accessible by the local node ;
and if x is greater than one , based on the value of x , making a determination as to whether to (1) execute the task at the local node with the processor unit , (2) , distribute the task among two or more local node processors , (3) distribute the task to one or more of the distributed nodes accessible to the local node over a LAN , or (4) distribute the task to one or more of the distributed nodes that are accessible to the local node over a WAN ;
and implementing (1) , (2) , (3) , or (4) with the local node according to the determination .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate (output data) , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (total size) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20100235845A1
CLAIM 3
. The method of claim 2 , further comprising retrieving output data (memory consumption rate) for the sub-tasks from the x nodes .

US20100235845A1
CLAIM 8
. The method of claim 7 wherein the EET is calculated by a formula of the type : EET ≈ f (x , TS , CS) / BW_o + ET / x + RS / BW_i + RTT , where ET represents an execution time of all sub-tasks on one node , TS represents a total size (maximum capacity) of data which is divided among the sub-tasks , CS represents a constant sized data needed by each sub-task , RS represents a total size of output data produced by the tasks , BW_o , BW_i respectively represent outgoing and incoming bandwidths for all nodes , and RTT represents a round-trip message time from the local node to a distributed node .
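To make the control flow recited in claim 13 above easier to follow against the reference, here is a minimal sketch of a two-scheme resource manager: rank by consumption rate, re-prioritize only when the change in consumption exceeds a threshold and the allowed maximum capacity is exceeded, then select candidates for migration to alternate cloud resources. The class, weighting, and threshold handling are hypothetical simplifications, not constructions drawn from either patent.

# Minimal sketch of the monitor -> prioritize -> threshold -> re-prioritize -> migrate
# flow of US9635134B2 claim 13. Names and policies are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VmSample:
    vm_id: str
    cpu: float            # CPU consumption rate
    memory: float         # memory consumption rate
    io: float             # I/O access rate
    change_region: float  # size of changed display regions

def consumption(s: VmSample) -> float:
    # Aggregate consumption rate; equal weighting is an assumption, not claim language.
    return s.cpu + s.memory + s.io + s.change_region

def manage(prev, curr, threshold, max_capacity):
    """prev/curr: dicts of vm_id -> VmSample; returns VMs selected for migration."""
    # First scheme: prioritize by current consumption rate.
    ranked = sorted(curr.values(), key=consumption, reverse=True)

    # Determine whether the change in consumption rate exceeds the threshold.
    changed = [s for s in ranked
               if abs(consumption(s) - consumption(prev[s.vm_id])) > threshold]

    # Second scheme: only re-prioritize when the change exceeded the threshold,
    # taking the allowed maximum capacity of the environment into account.
    total = sum(consumption(s) for s in ranked)
    migrate = []
    if changed and total > max_capacity:
        for s in changed:                # highest consumers first
            migrate.append(s.vm_id)      # candidate for alternate cloud resources
            total -= consumption(s)
            if total <= max_capacity:
                break
    return migrate

prev = {"vm1": VmSample("vm1", 10, 5, 2, 1), "vm2": VmSample("vm2", 2, 2, 1, 0)}
curr = {"vm1": VmSample("vm1", 30, 10, 5, 4), "vm2": VmSample("vm2", 2, 2, 1, 0)}
print(manage(prev, curr, threshold=10, max_capacity=40))   # -> ['vm1']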

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (more process) , or resources included in virtual private networks (VPNs) .
US20100235845A1
CLAIM 1
. A method for processing of processor executable tasks , comprising : generating a task with a local node , wherein the local node includes one or more processor (Internet resources) units operably coupled to one or more distributed nodes ;
dividing the task into one or more sub-tasks ;
determining an optimum number of nodes x on which to process the one or more sub-tasks , wherein x is based at least partly on parameters relating to processing the sub-tasks at nodes accessible by the local node ;
and if x is greater than one , based on the value of x , making a determination as to whether to (1) execute the task at the local node with the processor unit , (2) distribute the task among two or more local node processors , (3) distribute the task to one or more of the distributed nodes accessible to the local node over a LAN , or (4) distribute the task to one or more of the distributed nodes that are accessible to the local node over a WAN ;
and implementing (1) , (2) , (3) , or (4) with the local node according to the determination .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20110078691A1

Filed: 2009-09-30     Issued: 2011-03-31

Structured task hierarchy for a parallel runtime

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC

Huseyin S. Yildiz, Stephen H. Toub, John J. Duffy
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing (current task) environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (more processors, executing code) , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20110078691A1
CLAIM 1
. At a computer system including one or more processors (processor usage, processor usage tracking) and system memory , a method for executing code (processor usage, processor usage tracking) in accordance with a task directed acyclic graph that divides processing into tasks , the method comprising : an act of initiating execution of a task included in the task directed acyclic graph ;
an act of spawning a plurality of concurrently executable sub-tasks during the time the task is executing , each sub-task configured to perform an indicated portion of work related to the task ;
for at least one of the plurality of concurrently executable sub-tasks : an act of attaching the sub-task as a child task of the task within the task directed acyclic graph such that the task is also the parent task of the sub-task task within the task directed acyclic graph ;
an act of grouping the sub-task with any other sub-tasks attached to the task within the context of the task ;
and an act of preventing completion of the task until the attached sub-task is complete ;
an act of a multi-core processor concurrently executing each of the plurality of concurrently executable sub-tasks to perform the indicated portions of work related to the task , each of the plurality of concurrently executable sub-tasks concurrently executed with at least one other task at the computer system ;
and upon detecting that all attached sub-tasks have completed , an act of permitting the task to complete .

US20110078691A1
CLAIM 14
. The method as recited in claim 12 , wherein the act of maintaining a state object comprising an act of maintain a state object that includes an exception array and one or more of the following : a current task (cloud computing) reference , a parent task pointer , a complete countdown , an exceptional child task array , and a shielded task flag .
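The structured-task behaviour quoted from the reference above (attach sub-tasks as children of the executing task, group them under that task, and block the parent's completion until every attached child completes) can be pictured with the small sketch below. It is a schematic reading of the reference claim, not the reference's parallel runtime; scheduler, thread-pool, and exception-array details are omitted, and the names are hypothetical.

# Schematic parent/child task gating in the spirit of US20110078691A1 claim 1.
# Concurrency is simulated with threads; the real runtime's scheduler is not modeled.
import threading

class Task:
    def __init__(self, name, work):
        self.name = name
        self.work = work              # callable performing this task's portion of work
        self.children = []            # attached sub-tasks (child tasks in the DAG)
        self.done = threading.Event()

    def attach(self, child):
        # Attach a sub-task as a child; the parent cannot complete before it does.
        self.children.append(child)
        return child

    def run(self):
        self.work()
        threads = [threading.Thread(target=c.run) for c in self.children]
        for t in threads:             # concurrently execute the attached sub-tasks
            t.start()
        for t in threads:             # prevent completion until all children complete
            t.join()
        self.done.set()               # only now is the parent permitted to complete

parent = Task("parent", lambda: print("parent work"))
for i in range(3):
    parent.attach(Task(f"child-{i}", lambda i=i: print("child", i)))
parent.run()
print("parent complete:", parent.done.is_set())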

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (more processors, executing code) tracking .
US20110078691A1
CLAIM 1
. At a computer system including one or more processors (processor usage, processor usage tracking) and system memory , a method for executing code (processor usage, processor usage tracking) in accordance with a task directed acyclic graph that divides processing into tasks , the method comprising : an act of initiating execution of a task included in the task directed acyclic graph ;
an act of spawning a plurality of concurrently executable sub-tasks during the time the task is executing , each sub-task configured to perform an indicated portion of work related to the task ;
for at least one of the plurality of concurrently executable sub-tasks : an act of attaching the sub-task as a child task of the task within the task directed acyclic graph such that the task is also the parent task of the sub-task task within the task directed acyclic graph ;
an act of grouping the sub-task with any other sub-tasks attached to the task within the context of the task ;
and an act of preventing completion of the task until the attached sub-task is complete ;
an act of a multi-core processor concurrently executing each of the plurality of concurrently executable sub-tasks to perform the indicated portions of work related to the task , each of the plurality of concurrently executable sub-tasks concurrently executed with at least one other task at the computer system ;
and upon detecting that all attached sub-tasks have completed , an act of permitting the task to complete .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing (current task) resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (more processors, executing code) , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20110078691A1
CLAIM 1
. At a computer system including one or more processors (processor usage, processor usage tracking) and system memory , a method for executing code (processor usage, processor usage tracking) in accordance with a task directed acyclic graph that divides processing into tasks , the method comprising : an act of initiating execution of a task included in the task directed acyclic graph ;
an act of spawning a plurality of concurrently executable sub-tasks during the time the task is executing , each sub-task configured to perform an indicated portion of work related to the task ;
for at least one of the plurality of concurrently executable sub-tasks : an act of attaching the sub-task as a child task of the task within the task directed acyclic graph such that the task is also the parent task of the sub-task task within the task directed acyclic graph ;
an act of grouping the sub-task with any other sub-tasks attached to the task within the context of the task ;
and an act of preventing completion of the task until the attached sub-task is complete ;
an act of a multi-core processor concurrently executing each of the plurality of concurrently executable sub-tasks to perform the indicated portions of work related to the task , each of the plurality of concurrently executable sub-tasks concurrently executed with at least one other task at the computer system ;
and upon detecting that all attached sub-tasks have completed , an act of permitting the task to complete .

US20110078691A1
CLAIM 14
. The method as recited in claim 12 , wherein the act of maintaining a state object comprising an act of maintain a state object that includes an exception array and one or more of the following : a current task (cloud computing) reference , a parent task pointer , a complete countdown , an exceptional child task array , and a shielded task flag .
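Several of the challenged claims, including claim 7 above, treat "a change region size based on changed regions of a graphical display" as one of the monitored quantities. A minimal way to compute such a metric from two display frames is sketched below; the tiling into fixed 8x8 blocks and the pixel representation are assumptions for illustration, not constructions taken from the patent.

# Illustrative change-region-size metric: count display blocks that differ between
# two frames (each frame is a 2-D list of pixel values). Block tiling is assumed.
def change_region_size(prev_frame, curr_frame, block=8):
    height, width = len(curr_frame), len(curr_frame[0])
    changed_blocks = 0
    for y in range(0, height, block):
        for x in range(0, width, block):
            region_changed = any(
                prev_frame[yy][xx] != curr_frame[yy][xx]
                for yy in range(y, min(y + block, height))
                for xx in range(x, min(x + block, width))
            )
            changed_blocks += region_changed
    return changed_blocks * block * block   # approximate changed area in pixels

prev = [[0] * 16 for _ in range(16)]
curr = [row[:] for row in prev]
curr[3][5] = 255                            # one pixel changed -> one 8x8 block counted
print(change_region_size(prev, curr))       # 64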

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (current task) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20110078691A1
CLAIM 14
. The method as recited in claim 12 , wherein the act of maintaining a state object comprising an act of maintain a state object that includes an exception array and one or more of the following : a current task (cloud computing) reference , a parent task pointer , a complete countdown , an exceptional child task array , and a shielded task flag .
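Claim 8 above recites prioritizing VMs with a low inter-reference recency set (LIRS) replacement scheme, and claim 12, charted below, recites the related LRU scheme. As a rough baseline for comparison, the sketch below ranks VMs by least-recent use only; it implements simple LRU ordering, not the full LIRS algorithm, and the access-log format is an assumption.

# Simple LRU-style ranking of VMs from an access log (illustrative only; a full LIRS
# scheme would additionally track inter-reference recency, which is omitted here).
from collections import OrderedDict

def lru_priority(access_log):
    """access_log: iterable of vm_ids in the order they were accessed.
    Returns vm_ids ordered least-recently-used first (first candidates to demote or migrate)."""
    recency = OrderedDict()
    for vm_id in access_log:
        recency.pop(vm_id, None)   # re-insert to move the VM to the most-recent position
        recency[vm_id] = True
    return list(recency.keys())    # least recently used comes first

print(lru_priority(["vm1", "vm2", "vm1", "vm3"]))   # ['vm2', 'vm1', 'vm3']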

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (current task) resource manager to use LIRS based processor usage (more processors, executing code) tracking .
US20110078691A1
CLAIM 1
. At a computer system including one or more processors (processor usage, processor usage tracking) and system memory , a method for executing code (processor usage, processor usage tracking) in accordance with a task directed acyclic graph that divides processing into tasks , the method comprising : an act of initiating execution of a task included in the task directed acyclic graph ;
an act of spawning a plurality of concurrently executable sub-tasks during the time the task is executing , each sub-task configured to perform an indicated portion of work related to the task ;
for at least one of the plurality of concurrently executable sub-tasks : an act of attaching the sub-task as a child task of the task within the task directed acyclic graph such that the task is also the parent task of the sub-task task within the task directed acyclic graph ;
an act of grouping the sub-task with any other sub-tasks attached to the task within the context of the task ;
and an act of preventing completion of the task until the attached sub-task is complete ;
an act of a multi-core processor concurrently executing each of the plurality of concurrently executable sub-tasks to perform the indicated portions of work related to the task , each of the plurality of concurrently executable sub-tasks concurrently executed with at least one other task at the computer system ;
and upon detecting that all attached sub-tasks have completed , an act of permitting the task to complete .

US20110078691A1
CLAIM 14
. The method as recited in claim 12 , wherein the act of maintaining a state object comprising an act of maintain a state object that includes an exception array and one or more of the following : a current task (cloud computing) reference , a parent task pointer , a complete countdown , an exceptional child task array , and a shielded task flag .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (current task) resource manager to use LIRS based memory usage tracking .
US20110078691A1
CLAIM 14
. The method as recited in claim 12 , wherein the act of maintaining a state object comprising an act of maintain a state object that includes an exception array and one or more of the following : a current task (cloud computing) reference , a parent task pointer , a complete countdown , an exceptional child task array , and a shielded task flag .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (current task) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
US20110078691A1
CLAIM 14
. The method as recited in claim 12 , wherein the act of maintaining a state object comprising an act of maintain a state object that includes an exception array and one or more of the following : a current task (cloud computing) reference , a parent task pointer , a complete countdown , an exceptional child task array , and a shielded task flag .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing (current task) environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (more processors, executing code) , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20110078691A1
CLAIM 1
. At a computer system including one or more processors (processor usage, processor usage tracking) and system memory , a method for executing code (processor usage, processor usage tracking) in accordance with a task directed acyclic graph that divides processing into tasks , the method comprising : an act of initiating execution of a task included in the task directed acyclic graph ;
an act of spawning a plurality of concurrently executable sub-tasks during the time the task is executing , each sub-task configured to perform an indicated portion of work related to the task ;
for at least one of the plurality of concurrently executable sub-tasks : an act of attaching the sub-task as a child task of the task within the task directed acyclic graph such that the task is also the parent task of the sub-task task within the task directed acyclic graph ;
an act of grouping the sub-task with any other sub-tasks attached to the task within the context of the task ;
and an act of preventing completion of the task until the attached sub-task is complete ;
an act of a multi-core processor concurrently executing each of the plurality of concurrently executable sub-tasks to perform the indicated portions of work related to the task , each of the plurality of concurrently executable sub-tasks concurrently executed with at least one other task at the computer system ;
and upon detecting that all attached sub-tasks have completed , an act of permitting the task to complete .

US20110078691A1
CLAIM 14
. The method as recited in claim 12 , wherein the act of maintaining a state object comprising an act of maintain a state object that includes an exception array and one or more of the following : a current task (cloud computing) reference , a parent task pointer , a complete countdown , an exceptional child task array , and a shielded task flag .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (current task) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20110078691A1
CLAIM 14
. The method as recited in claim 12 , wherein the act of maintaining a state object comprising an act of maintain a state object that includes an exception array and one or more of the following : a current task (cloud computing) reference , a parent task pointer , a complete countdown , an exceptional child task array , and a shielded task flag .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (current task) resource manager to use LIRS based processor usage (more processors, executing code) tracking .
US20110078691A1
CLAIM 1
. At a computer system including one or more processors (processor usage, processor usage tracking) and system memory , a method for executing code (processor usage, processor usage tracking) in accordance with a task directed acyclic graph that divides processing into tasks , the method comprising : an act of initiating execution of a task included in the task directed acyclic graph ;
an act of spawning a plurality of concurrently executable sub-tasks during the time the task is executing , each sub-task configured to perform an indicated portion of work related to the task ;
for at least one of the plurality of concurrently executable sub-tasks : an act of attaching the sub-task as a child task of the task within the task directed acyclic graph such that the task is also the parent task of the sub-task task within the task directed acyclic graph ;
an act of grouping the sub-task with any other sub-tasks attached to the task within the context of the task ;
and an act of preventing completion of the task until the attached sub-task is complete ;
an act of a multi-core processor concurrently executing each of the plurality of concurrently executable sub-tasks to perform the indicated portions of work related to the task , each of the plurality of concurrently executable sub-tasks concurrently executed with at least one other task at the computer system ;
and upon detecting that all attached sub-tasks have completed , an act of permitting the task to complete .

US20110078691A1
CLAIM 14
. The method as recited in claim 12 , wherein the act of maintaining a state object comprising an act of maintain a state object that includes an exception array and one or more of the following : a current task (cloud computing) reference , a parent task pointer , a complete countdown , an exceptional child task array , and a shielded task flag .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (current task) resource manager to use LIRS based memory usage tracking .
US20110078691A1
CLAIM 14
. The method as recited in claim 12 , wherein the act of maintaining a state object comprising an act of maintain a state object that includes an exception array and one or more of the following : a current task (cloud computing) reference , a parent task pointer , a complete countdown , an exceptional child task array , and a shielded task flag .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (current task) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
US20110078691A1
CLAIM 14
. The method as recited in claim 12 , wherein the act of maintaining a state object comprising an act of maintain a state object that includes an exception array and one or more of the following : a current task (cloud computing) reference , a parent task pointer , a complete countdown , an exceptional child task array , and a shielded task flag .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
WO2010039023A2

Filed: 2009-09-25     Issued: 2010-04-08

Method to assign traffic priority or bandwidth for application at the end users-device

(Original Assignee) Mimos Berhad     

Khong Neng Choong
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources (loading one) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (file size) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2010039023A2
CLAIM 4
. A method according to claim I 5 wherein the kernel-level agent determines the available bandwidth according to the steps of : downloading one (cloud resources) or more test files of different sizes from a server ;
calculating the downloading time by deducting the start-time from end-time ;
and determining the throughput of the network by dividing the file size (maximum capacity) with downloading time
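The bandwidth-probing steps quoted from the reference (download a test file, time it, divide the file size by the download time) amount to the small calculation below. The helper is a sketch of that arithmetic only; it performs no network I/O, and the units (bytes and seconds) are an assumption.

# Throughput estimate as recited in the reference: file size divided by download time.
def throughput(file_size_bytes, start_time_s, end_time_s):
    download_time = end_time_s - start_time_s    # "deducting the start-time from end-time"
    return file_size_bytes / download_time       # bytes per second

# Example: a 5 MB test file fetched in 2 seconds -> 2.5 MB/s.
print(throughput(5 * 1024 * 1024, start_time_s=0.0, end_time_s=2.0))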

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (loading one) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
WO2010039023A2
CLAIM 4
. A method according to claim I 5 wherein the kernel-level agent determines the available bandwidth according to the steps of : downloading one (cloud resources) or more test files of different sizes from a server ;
calculating the downloading time by deducting the start-time from end-time ;
and determining the throughput of the network by dividing the file size with downloading time

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (loading one) using the LIRS replacement scheme comprises using LIRS based processor usage tracking .
WO2010039023A2
CLAIM 4
. A method according to claim I 5 wherein the kernel-level agent determines the available bandwidth according to the steps of : downloading one (cloud resources) or more test files of different sizes from a server ;
calculating the downloading time by deducting the start-time from end-time ;
and determining the throughput of the network by dividing the file size with downloading time

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (loading one) using the LIRS replacement scheme comprises using LIRS based memory usage tracking .
WO2010039023A2
CLAIM 4
. A method according to claim I 5 wherein the kernel-level agent determines the available bandwidth according to the steps of : downloading one (cloud resources) or more test files of different sizes from a server ;
calculating the downloading time by deducting the start-time from end-time ;
and determining the throughput of the network by dividing the file size with downloading time

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (loading one) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
WO2010039023A2
CLAIM 4
. A method according to claim I 5 wherein the kernel-level agent determines the available bandwidth according to the steps of : downloading one (cloud resources) or more test files of different sizes from a server ;
calculating the downloading time by deducting the start-time from end-time ;
and determining the throughput of the network by dividing the file size with downloading time

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (loading one) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
WO2010039023A2
CLAIM 4
. A method according to claim I 5 wherein the kernel-level agent determines the available bandwidth according to the steps of : downloading one (cloud resources) or more test files of different sizes from a server ;
calculating the downloading time by deducting the start-time from end-time ;
and determining the throughput of the network by dividing the file size with downloading time

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources (loading one) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (file size) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2010039023A2
CLAIM 4
. A method according to claim I 5 wherein the kernel-level agent determines the available bandwidth according to the steps of : downloading one (cloud resources) or more test files of different sizes from a server ;
calculating the downloading time by deducting the start-time from end-time ;
and determining the throughput of the network by dividing the file size (maximum capacity) with downloading time

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (loading one) using a low inter-reference recency set (LIRS) replacement scheme .
WO2010039023A2
CLAIM 4
. A method according to claim I 5 wherein the kernel-level agent determines the available bandwidth according to the steps of : downloading one (cloud resources) or more test files of different sizes from a server ;
calculating the downloading time by deducting the start-time from end-time ;
and determining the throughput of the network by dividing the file size with downloading time

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (loading one) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
WO2010039023A2
CLAIM 4
. A method according to claim I 5 wherein the kernel-level agent determines the available bandwidth according to the steps of : downloading one (cloud resources) or more test files of different sizes from a server ;
calculating the downloading time by deducting the start-time from end-time ;
and determining the throughput of the network by dividing the file size with downloading time

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (loading one) using least recently used (LRU) replacement scheme .
WO2010039023A2
CLAIM 4
. A method according to claim I 5 wherein the kernel-level agent determines the available bandwidth according to the steps of : downloading one (cloud resources) or more test files of different sizes from a server ;
calculating the downloading time by deducting the start-time from end-time ;
and determining the throughput of the network by dividing the file size with downloading time

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (loading one) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (file size) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2010039023A2
CLAIM 4
. A method according to claim I 5 wherein the kernel-level agent determines the available bandwidth according to the steps of : downloading one (cloud resources) or more test files of different sizes from a server ;
calculating the downloading time by deducting the start-time from end-time ;
and determining the throughput of the network by dividing the file size (maximum capacity) with downloading time

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (loading one) using a low inter-reference recency set (LIRS) replacement scheme .
WO2010039023A2
CLAIM 4
. A method according to claim I 5 wherein the kernel-level agent determines the available bandwidth according to the steps of : downloading one (cloud resources) or more test files of different sizes from a server ;
calculating the downloading time by deducting the start-time from end-time ;
and determining the throughput of the network by dividing the file size with downloading time

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (loading one) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
WO2010039023A2
CLAIM 4
. A method according to claim I 5 wherein the kernel-level agent determines the available bandwidth according to the steps of : downloading one (cloud resources) or more test files of different sizes from a server ;
calculating the downloading time by deducting the start-time from end-time ;
and determining the throughput of the network by dividing the file size with downloading time

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (loading one) using least recently used (LRU) replacement scheme .
WO2010039023A2
CLAIM 4
. A method according to claim I 5 wherein the kernel-level agent determines the available bandwidth according to the steps of : downloading one (cloud resources) or more test files of different sizes from a server ;
calculating the downloading time by deducting the start-time from end-time ;
and determining the throughput of the network by dividing the file size with downloading time




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20090204718A1

Filed: 2009-02-09     Issued: 2009-08-13

Using memory equivalency across compute clouds for accelerated virtual memory migration and memory de-duplication

(Original Assignee) Lawton Kevin P; Stevan Vlaovic     

Kevin P. Lawton, Stevan Vlaovic
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate (data values) of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (internal memory) , memory usage (first operating) , or input/output (I/O) access rates (fewer bits) for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource (second number) management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (carrying one) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20090204718A1
CLAIM 3
. The method of claim 1 wherein generating a first memory state value representative of contents of a first region of memory comprises generating a signature having fewer bits (access rates) than necessary to represent all possible states of the first region of memory .

US20090204718A1
CLAIM 5
. The method of claim 4 wherein generating the first memory state value comprises generating a signature having a first number of bits and generating the third memory state value comprises generating a signature having a second number (first resource) of bits , the second number being larger than the first number .

US20090204718A1
CLAIM 6
. The method of claim 4 wherein generating the first and third memory state values comprises combining data values (consumption rate) within the first region of memory according to respective first and second algorithms .

US20090204718A1
CLAIM 9
. The method of claim 1 further comprising hosting a first operating (memory usage) system within the first computing system and hosting a second operating system within the second computing system .

US20090204718A1
CLAIM 17
. A system comprising : a communications network ;
a first computing system coupled to the communications network to generate a first memory state value representative of contents of a first region of internal memory (processor usage) of the first computing system and to output the first memory state value via the communications network ;
and a second computing system coupled to receive the first memory state value via the communications network and to (i) compare the first memory state value with a second memory state value representative of contents of a second region of internal memory of the second computing system and (ii) record equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory values match .

US20090204718A1
CLAIM 20
. A computer-readable medium carrying one (maximum capacity) or more sequences of instructions which , when executed by one or more processing units within a network of computing systems , cause the one or more processing units to : generate a first memory state value representative of contents of a first region of memory within a first computing system ;
communicate the first memory state value from the first computing system to a second computing system via the communications network ;
generate a second memory state value representative of contents of a second region of memory within the second computing system ;
comparing the first and second memory state values ;
and record equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory state values match .
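The reference claims quoted above describe generating a compact memory-state value (a signature with fewer bits than the region it represents), exchanging it between computing systems, and recording equivalency when the values match. The sketch below illustrates that comparison with an ordinary cryptographic hash; the choice of hash and the dictionary standing in for the "memory equivalency database" are assumptions for illustration, not the reference's scheme.

# Illustrative memory-state signatures and an equivalency record, in the spirit of
# US20090204718A1. hashlib.sha256 stands in for the claimed signature; the actual
# scheme may combine data values differently and use fewer bits.
import hashlib

def memory_state_value(region: bytes) -> str:
    return hashlib.sha256(region).hexdigest()

equivalency_db = {}   # (system_a, system_b) -> True when regions were found equivalent

def record_equivalency(name_a, region_a, name_b, region_b):
    if memory_state_value(region_a) == memory_state_value(region_b):
        # A production system would verify value-by-value before recording (compare claim 7).
        equivalency_db[(name_a, name_b)] = True
    return equivalency_db

region = b"\x00" * 4096                       # identical 4 KiB regions on two systems
print(record_equivalency("system-1", region, "system-2", bytes(region)))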

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the first resource (second number) management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20090204718A1
CLAIM 5
. The method of claim 4 wherein generating the first memory state value comprises generating a signature having a first number of bits and generating the third memory state value comprises generating a signature having a second number (first resource) of bits , the second number being larger than the first number .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (internal memory) tracking .
US20090204718A1
CLAIM 17
. A system comprising : a communications network ;
a first computing system coupled to the communications network to generate a first memory state value representative of contents of a first region of internal memory (processor usage) of the first computing system and to output the first memory state value via the communications network ;
and a second computing system coupled to receive the first memory state value via the communications network and to (i) compare the first memory state value with a second memory state value representative of contents of a second region of internal memory of the second computing system and (ii) record equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory values match .

US20090204718A1
CLAIM 20
. A computer-readable medium carrying one or more sequences of instructions which , when executed by one or more processing units (processor usage tracking) within a network of computing systems , cause the one or more processing units to : generate a first memory state value representative of contents of a first region of memory within a first computing system ;
communicate the first memory state value from the first computing system to a second computing system via the communications network ;
generate a second memory state value representative of contents of a second region of memory within the second computing system ;
comparing the first and second memory state values ;
and record equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory state values match .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (first operating) tracking .
US20090204718A1
CLAIM 9
. The method of claim 1 further comprising hosting a first operating (memory usage) system within the first computing system and hosting a second operating system within the second computing system .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (storage location) , Internet resources (more process) , or resources included in virtual private networks (VPNs) .
US20090204718A1
CLAIM 7
. The method of claim 1 wherein the contents of the first region of memory comprises a first plurality of data values stored within respective storage locations (hybrid cloud) , and the contents of the second region of memory comprises a second plurality of data values stored within respective storage locations , and wherein recording equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory state values match comprises : comparing each of the first plurality of data values with a respective one of the second plurality of data values if the first and second memory state values match ;
and recording equivalency between the first and second regions of memory within the memory equivalency database if the each of the first plurality of data values matches the respective one of the second plurality of data values .

US20090204718A1
CLAIM 20
. A computer-readable medium carrying one or more sequences of instructions which , when executed by one or more processing (Internet resources) units within a network of computing systems , cause the one or more processing units to : generate a first memory state value representative of contents of a first region of memory within a first computing system ;
communicate the first memory state value from the first computing system to a second computing system via the communications network ;
generate a second memory state value representative of contents of a second region of memory within the second computing system ;
comparing the first and second memory state values ;
and record equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory state values match .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate (data values) of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (internal memory) , memory usage (first operating) , or I/O access rates (fewer bits) for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource (second number) management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (carrying one) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20090204718A1
CLAIM 3
. The method of claim 1 wherein generating a first memory state value representative of contents of a first region of memory comprises generating a signature having fewer bits (access rates) than necessary to represent all possible states of the first region of memory .

US20090204718A1
CLAIM 5
. The method of claim 4 wherein generating the first memory state value comprises generating a signature having a first number of bits and generating the third memory state value comprises generating a signature having a second number (first resource) of bits , the second number being larger than the first number .

US20090204718A1
CLAIM 6
. The method of claim 4 wherein generating the first and third memory state values comprises combining data values (consumption rate) within the first region of memory according to respective first and second algorithms .

US20090204718A1
CLAIM 9
. The method of claim 1 further comprising hosting a first operating (memory usage) system within the first computing system and hosting a second operating system within the second computing system .

US20090204718A1
CLAIM 17
. A system comprising : a communications network ;
a first computing system coupled to the communications network to generate a first memory state value representative of contents of a first region of internal memory (processor usage) of the first computing system and to output the first memory state value via the communications network ;
and a second computing system coupled to receive the first memory state value via the communications network and to (i) compare the first memory state value with a second memory state value representative of contents of a second region of internal memory of the second computing system and (ii) record equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory values match .

US20090204718A1
CLAIM 20
. A computer-readable medium carrying one (maximum capacity) or more sequences of instructions which , when executed by one or more processing units within a network of computing systems , cause the one or more processing units to : generate a first memory state value representative of contents of a first region of memory within a first computing system ;
communicate the first memory state value from the first computing system to a second computing system via the communications network ;
generate a second memory state value representative of contents of a second region of memory within the second computing system ;
comparing the first and second memory state values ;
and record equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory state values match .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (internal memory) tracking .
US20090204718A1
CLAIM 17
. A system comprising : a communications network ;
a first computing system coupled to the communications network to generate a first memory state value representative of contents of a first region of internal memory (processor usage) of the first computing system and to output the first memory state value via the communications network ;
and a second computing system coupled to receive the first memory state value via the communications network and to (i) compare the first memory state value with a second memory state value representative of contents of a second region of internal memory of the second computing system and (ii) record equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory values match .

US20090204718A1
CLAIM 20
. A computer-readable medium carrying one or more sequences of instructions which , when executed by one or more processing unit (processor usage tracking) s within a network of computing systems , cause the one or more processing units to : generate a first memory state value representative of contents of a first region of memory within a first computing system ;
communicate the first memory state value from the first computing system to a second computing system via the communications network ;
generate a second memory state value representative of contents of a second region of memory within the second computing system ;
comparing the first and second memory state values ;
and record equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory state values match .
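
The "LIRS based processor usage tracking" of claim 9 is charted above only by analogy; as background, LIRS ranks items by inter-reference recency (how many distinct other items appeared between an item's last two references). The sketch below applies that ordering to VM identifiers in a hypothetical trace of high-usage events; it is a simplified reading of the LIRS idea, not the patent's algorithm.

```python
from collections import defaultdict

def inter_reference_recency(trace):
    """IRR per VM: distinct other VMs observed between its last two usage
    events, or None when the VM was observed only once ("infinite" IRR)."""
    positions = defaultdict(list)
    for pos, vm in enumerate(trace):
        positions[vm].append(pos)
    irr = {}
    for vm, pos_list in positions.items():
        if len(pos_list) < 2:
            irr[vm] = None
        else:
            window = trace[pos_list[-2] + 1:pos_list[-1]]
            irr[vm] = len(set(window) - {vm})
    return irr

def prioritize_lirs(trace):
    """Rank VMs LIRS-style: low IRR (the LIR set) first, one-off VMs last."""
    irr = inter_reference_recency(trace)
    return sorted(irr, key=lambda vm: float("inf") if irr[vm] is None else irr[vm])

print(prioritize_lirs(["vm1", "vm2", "vm1", "vm3", "vm2", "vm4"]))
# -> ['vm1', 'vm2', 'vm3', 'vm4'] for this hypothetical trace
```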

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (first operating) tracking .
US20090204718A1
CLAIM 9
. The method of claim 1 further comprising hosting a first operating (memory usage) system within the first computing system and hosting a second operating system within the second computing system .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (storage location) , Internet resources (more process) , or resources included in virtual private networks (VPNs) .
US20090204718A1
CLAIM 7
. The method of claim 1 wherein the contents of the first region of memory comprises a first plurality of data values stored within respective storage location (hybrid cloud) s , and the contents of the second region of memory comprises a second plurality of data values stored within respective storage locations , and wherein recording equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory state values match comprises : comparing each of the first plurality of data values with a respective one of the second plurality of data values if the first and second memory state values match ;
and recording equivalency between the first and second regions of memory within the memory equivalency database if the each of the first plurality of data values matches the respective one of the second plurality of data values .

US20090204718A1
CLAIM 20
. A computer-readable medium carrying one or more sequences of instructions which , when executed by one or more process (Internet resources) ing units within a network of computing systems , cause the one or more processing units to : generate a first memory state value representative of contents of a first region of memory within a first computing system ;
communicate the first memory state value from the first computing system to a second computing system via the communications network ;
generate a second memory state value representative of contents of a second region of memory within the second computing system ;
comparing the first and second memory state values ;
and record equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory state values match .
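
Claims 11 and 17 enumerate the kinds of "alternate cloud resources" to which a VM's consumption may be migrated. Purely as a reading aid, that enumeration can be modelled as a catalogue of pools outside the local environment; the pool names, the capacity field and the selection rule below are invented for illustration and appear in neither patent.

```python
from dataclasses import dataclass
from enum import Enum

class PoolType(Enum):
    PUBLIC = "public cloud"
    COMMUNITY = "community cloud"
    PRIVATE = "private cloud"
    HYBRID = "hybrid cloud"
    INTERNET = "Internet resources"
    VPN = "virtual private network"

@dataclass
class AlternatePool:
    name: str
    kind: PoolType
    free_capacity: float          # hypothetical unit, e.g. spare vCPUs

def pick_migration_target(pools, demand):
    """Pick the alternate pool with the most spare capacity that can absorb
    the displaced VM's demand (an illustrative policy, not a claimed one)."""
    fits = [p for p in pools if p.free_capacity >= demand]
    return max(fits, key=lambda p: p.free_capacity, default=None)

pools = [AlternatePool("partner-vpn", PoolType.VPN, 4.0),
         AlternatePool("public-east", PoolType.PUBLIC, 32.0)]
print(pick_migration_target(pools, demand=8.0).name)   # public-east
```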

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate (data values) of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource (second number) management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (internal memory) , memory usage (first operating) , I/O access rates (fewer bits) , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (carrying one) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20090204718A1
CLAIM 3
. The method of claim 1 wherein generating a first memory state value representative of contents of a first region of memory comprises generating a signature having fewer bits (access rates) than necessary to represent all possible states of the first region of memory .

US20090204718A1
CLAIM 5
. The method of claim 4 wherein generating the first memory state value comprises generating a signature having a first number of bits and generating the third memory state value comprises generating a signature having a second number (first resource) of bits , the second number being larger than the first number .

US20090204718A1
CLAIM 6
. The method of claim 4 wherein generating the first and third memory state values comprises combining data values (consumption rate) within the first region of memory according to respective first and second algorithms .

US20090204718A1
CLAIM 9
. The method of claim 1 further comprising hosting a first operating (memory usage) system within the first computing system and hosting a second operating system within the second computing system .

US20090204718A1
CLAIM 17
. A system comprising : a communications network ;
a first computing system coupled to the communications network to generate a first memory state value representative of contents of a first region of internal memory (processor usage) of the first computing system and to output the first memory state value via the communications network ;
and a second computing system coupled to receive the first memory state value via the communications network and to (i) compare the first memory state value with a second memory state value representative of contents of a second region of internal memory of the second computing system and (ii) record equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory values match .

US20090204718A1
CLAIM 20
. A computer-readable medium carrying one (maximum capacity) or more sequences of instructions which , when executed by one or more processing units within a network of computing systems , cause the one or more processing units to : generate a first memory state value representative of contents of a first region of memory within a first computing system ;
communicate the first memory state value from the first computing system to a second computing system via the communications network ;
generate a second memory state value representative of contents of a second region of memory within the second computing system ;
comparing the first and second memory state values ;
and record equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory state values match .
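
To keep the charted limitations of claim 13 in order, the sketch below strings them together: monitor a per-VM consumption rate, detect a change exceeding a predetermined threshold, prioritize with a first scheme otherwise and a second scheme once the change is detected, and shed the lowest-priority VMs while total consumption exceeds a maximum capacity. Every threshold, capacity and policy here is a placeholder; neither the target patent nor the reference discloses this code.

```python
def manage_cycle(samples, prev_samples, threshold, max_capacity,
                 first_scheme, second_scheme):
    """samples / prev_samples: {vm: consumption rate} for two monitoring periods.
    Returns (priority order, VMs whose consumption would be migrated)."""
    # Determine whether the change in the consumption rate exceeds the threshold.
    changed = any(abs(samples[vm] - prev_samples.get(vm, 0.0)) > threshold
                  for vm in samples)

    # Prioritize with the first scheme normally, the second once change is seen.
    priority = (second_scheme if changed else first_scheme)(samples)

    # Shed the lowest-priority VMs while total consumption exceeds the allowed
    # maximum capacity; these are the migration candidates.
    to_migrate, total = [], sum(samples.values())
    for vm in reversed(priority):
        if total <= max_capacity:
            break
        to_migrate.append(vm)
        total -= samples[vm]
    return priority, to_migrate

# Placeholder policies standing in for the two resource management schemes.
first = lambda rates: sorted(rates, key=rates.get)
second = lambda rates: sorted(rates, key=rates.get, reverse=True)

print(manage_cycle({"vm1": 60, "vm2": 35, "vm3": 20}, {"vm1": 20, "vm2": 30},
                   threshold=25, max_capacity=90,
                   first_scheme=first, second_scheme=second))
# (['vm1', 'vm2', 'vm3'], ['vm3', 'vm2'])
```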

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (internal memory) tracking .
US20090204718A1
CLAIM 17
. A system comprising : a communications network ;
a first computing system coupled to the communications network to generate a first memory state value representative of contents of a first region of internal memory (processor usage) of the first computing system and to output the first memory state value via the communications network ;
and a second computing system coupled to receive the first memory state value via the communications network and to (i) compare the first memory state value with a second memory state value representative of contents of a second region of internal memory of the second computing system and (ii) record equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory values match .

US20090204718A1
CLAIM 20
. A computer-readable medium carrying one or more sequences of instructions which , when executed by one or more processing unit (processor usage tracking) s within a network of computing systems , cause the one or more processing units to : generate a first memory state value representative of contents of a first region of memory within a first computing system ;
communicate the first memory state value from the first computing system to a second computing system via the communications network ;
generate a second memory state value representative of contents of a second region of memory within the second computing system ;
comparing the first and second memory state values ;
and record equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory state values match .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (first operating) tracking .
US20090204718A1
CLAIM 9
. The method of claim 1 further comprising hosting a first operating (memory usage) system within the first computing system and hosting a second operating system within the second computing system .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (storage location) , Internet resources (more process) , or resources included in virtual private networks (VPNs) .
US20090204718A1
CLAIM 7
. The method of claim 1 wherein the contents of the first region of memory comprises a first plurality of data values stored within respective storage location (hybrid cloud) s , and the contents of the second region of memory comprises a second plurality of data values stored within respective storage locations , and wherein recording equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory state values match comprises : comparing each of the first plurality of data values with a respective one of the second plurality of data values if the first and second memory state values match ;
and recording equivalency between the first and second regions of memory within the memory equivalency database if the each of the first plurality of data values matches the respective one of the second plurality of data values .

US20090204718A1
CLAIM 20
. A computer-readable medium carrying one or more sequences of instructions which , when executed by one or more process (Internet resources) ing units within a network of computing systems , cause the one or more processing units to : generate a first memory state value representative of contents of a first region of memory within a first computing system ;
communicate the first memory state value from the first computing system to a second computing system via the communications network ;
generate a second memory state value representative of contents of a second region of memory within the second computing system ;
comparing the first and second memory state values ;
and record equivalency between the first and second regions of memory within a memory equivalency database based , at least in part , upon whether the first and second memory state values match .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
WO2009088749A2

Filed: 2008-12-22     Issued: 2009-07-16

Methods and system for efficient data transfer over hybrid fiber coax infrastructure

(Original Assignee) Harmonic, Inc.     

Lior Assouline, Amir Leventer
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource (selecting content) management scheme based , at least in part , on a maximum capacity (coding parameter) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2009088749A2
CLAIM 1
. A method for providing client-server data transfer over a Hybrid Fiber Coax network , comprising : interfacing , at a client , a video channel ;
intercepting a content request made from an end-user computing device ;
notifying a server of a relevant intercepted message via one of using an interactive channel and tagging the request ;
selecting content (second resource) sent by the server over the video channel ;
processing the content selected so as to return it to its IP traffic format ;
and forwarding the content in its IP traffic format to the end-user computing device .

WO2009088749A2
CLAIM 32
. The method of Claim 20 , further comprising : preprocessing the content if it is of video format ;
and generating multiple versions of the content having at least one of different resolutions , frame rates , bit rate and other video encoding parameter (maximum capacity) s .
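
As a reading aid for the WO2009088749A2 mapping, claim 1's flow (intercept the end-user request, notify the server, receive the selected content over the video channel, return it to IP traffic format and forward it) is paraphrased below. The class, the channel objects and the depacketize helper are invented; nothing here is taken from the reference's disclosure.

```python
def depacketize(payload: bytes) -> bytes:
    """Placeholder for returning selected content to its IP traffic format."""
    return payload

class HfcClientProxy:
    """Illustrative client-side flow; names are assumptions, not the reference's."""
    def __init__(self, video_channel, interactive_channel, end_user_device):
        self.video_channel = video_channel              # dict: request -> payload
        self.interactive_channel = interactive_channel  # list of notifications
        self.end_user_device = end_user_device          # list of delivered packets

    def handle(self, content_request: str) -> None:
        # Intercept the request and notify the server via the interactive channel.
        self.interactive_channel.append(("notify", content_request))
        # Content selected by the server arrives over the video channel.
        payload = self.video_channel.get(content_request, b"")
        # Process it back to IP traffic format and forward to the end-user device.
        self.end_user_device.append(depacketize(payload))

video, interactive, device = {"clip-42": b"\x47\x40\x00"}, [], []
HfcClientProxy(video, interactive, device).handle("clip-42")
print(interactive, device)   # [('notify', 'clip-42')] [b'G@\x00']
```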

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the second resource (selecting content) management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
WO2009088749A2
CLAIM 1
. A method for providing client-server data transfer over a Hybrid Fiber Coax network , comprising : interfacing , at a client , a video channel ;
intercepting a content request made from an end-user computing device ;
notifying a server of a relevant intercepted message via one of using an interactive channel and tagging the request ;
selecting content (second resource) sent by the server over the video channel ;
processing the content selected so as to return it to its IP traffic format ;
and forwarding the content in its IP traffic format to the end-user computing device .
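
Claim 6 recites prioritizing the VMs with a least recently used (LRU) replacement scheme. For comparison with the LIRS variant charted elsewhere, a textbook LRU ordering over VM activity is sketched below; the sample events are hypothetical.

```python
from collections import OrderedDict

class LruVmTracker:
    """Order VMs by recency of activity; the least recently used VM is the
    first candidate when consumption has to be shed (illustrative only)."""
    def __init__(self):
        self._order = OrderedDict()

    def touch(self, vm: str) -> None:
        self._order.pop(vm, None)
        self._order[vm] = True            # most recently used moves to the end

    def least_recently_used(self):
        return next(iter(self._order), None)

tracker = LruVmTracker()
for event in ["vm1", "vm2", "vm1", "vm3"]:
    tracker.touch(event)
print(tracker.least_recently_used())      # vm2
```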

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource (selecting content) management scheme based , at least in part , on a maximum capacity (coding parameter) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2009088749A2
CLAIM 1
. A method for providing client-server data transfer over a Hybrid Fiber Coax network , comprising : interfacing , at a client , a video channel ;
intercepting a content request made from an end-user computing device ;
notifying a server of a relevant intercepted message via one of using an interactive channel and tagging the request ;
selecting content (second resource) sent by the server over the video channel ;
processing the content selected so as to return it to its IP traffic format ;
and forwarding the content in its IP traffic format to the end-user computing device .

WO2009088749A2
CLAIM 32
. The method of Claim 20 , further comprising : preprocessing the content if it is of video format ;
and generating multiple versions of the content having at least one of different resolutions , frame rates , bit rate and other video encoding parameter (maximum capacity) s .
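
Claim 32 of WO2009088749A2 describes generating multiple versions of the content at different resolutions, frame rates and bit rates, i.e. an encoding ladder. The toy enumeration below only illustrates that notion; every parameter value is invented.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Rendition:
    width: int
    height: int
    fps: int
    bitrate_kbps: int

def build_ladder(resolutions, frame_rates, bitrates):
    """Enumerate the versions that claim 32 describes generating in advance."""
    return [Rendition(w, h, fps, kbps)
            for (w, h), fps, kbps in product(resolutions, frame_rates, bitrates)]

ladder = build_ladder([(1280, 720), (640, 360)], [30], [3000, 800])
print(len(ladder))   # 4 hypothetical renditions
```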

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource (selecting content) management scheme based , at least in part , on a maximum capacity (coding parameter) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2009088749A2
CLAIM 1
. A method for providing client-server data transfer over a Hybrid Fiber Coax network , comprising : interfacing , at a client , a video channel ;
intercepting a content request made from an end-user computing device ;
notifying a server of a relevant intercepted message via one of using an interactive channel and tagging the request ;
selecting content (second resource) sent by the server over the video channel ;
processing the content selected so as to return it to its IP traffic format ;
and forwarding the content in its IP traffic format to the end-user computing device .

WO2009088749A2
CLAIM 32
. The method of Claim 20 , further comprising : preprocessing the content if it is of video format ;
and generating multiple versions of the content having at least one of different resolutions , frame rates , bit rate and other video encoding parameter (maximum capacity) s .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
EP2065804A1

Filed: 2008-08-06     Issued: 2009-06-03

Virtual machine monitor and multi-processor system

(Original Assignee) Hitachi Ltd     (Current Assignee) Hitachi Ltd

Keitaro Uehara, Yuji Tsushima
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (more processors) , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
EP2065804A1
CLAIM 1
A virtual machine monitor for operating a virtual server on a multiprocessor system connecting one or more processors (processor usage, processor usage tracking) , one or more memories , and one or more I/O devices by an internal network , the virtual machine monitor comprising : a physical hardware information acquiring portion for acquiring information of constituting a hardware including physical position information of the hardware including the processor , the memory , the I/O device , and the network of the multiprocessor system ;
a receiving portion for receiving a forming request including a number of the processors of the formed virtual server , an amount of the memory , and a policy of allocating the I/O device and a resource ;
and an allocation processing portion for allocating the processor and the memory to the virtual server to satisfy the allocation policy after allocating the I/O device to the virtual server based on the received forming request .
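
The charted EP2065804A1 claim 1 describes a monitor that takes a forming request (processor count, memory amount, I/O device and allocation policy), allocates the I/O device first, and then places processors and memory so as to satisfy the policy using physical position information. The sketch below is a loose paraphrase under assumed data shapes and an assumed "same node as the I/O device" policy; it is not the reference's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class FormingRequest:               # processors, memory, I/O device, policy
    cpus: int
    memory_gb: int
    io_device: str
    policy: str = "same-node-as-io"   # hypothetical allocation policy

@dataclass
class Node:                         # physical position information, simplified
    name: str
    free_cpus: int
    free_memory_gb: int
    io_devices: list = field(default_factory=list)

def allocate(request: FormingRequest, nodes: list):
    """Allocate the I/O device first, then pick processors and memory that
    satisfy the (assumed) locality policy toward that device."""
    io_node = next(n for n in nodes if request.io_device in n.io_devices)
    candidates = [io_node] if request.policy == "same-node-as-io" else nodes
    for node in candidates:
        if node.free_cpus >= request.cpus and node.free_memory_gb >= request.memory_gb:
            return {"io": io_node.name, "cpu_mem": node.name}
    return None

nodes = [Node("node0", 8, 64, ["nic0"]), Node("node1", 16, 128)]
print(allocate(FormingRequest(4, 32, "nic0"), nodes))
# {'io': 'node0', 'cpu_mem': 'node0'}
```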

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme (receiving port) .
EP2065804A1
CLAIM 1
A virtual machine monitor for operating a virtual server on a multiprocessor system connecting one or more processors , one or more memories , and one or more I/O devices by an internal network , the virtual machine monitor comprising : a physical hardware information acquiring portion for acquiring information of constituting a hardware including physical position information of the hardware including the processor , the memory , the I/O device , and the network of the multiprocessor system ;
a receiving port (replacement scheme) ion for receiving a forming request including a number of the processors of the formed virtual server , an amount of the memory , and a policy of allocating the I/O device and a resource ;
and an allocation processing portion for allocating the processor and the memory to the virtual server to satisfy the allocation policy after allocating the I/O device to the virtual server based on the received forming request .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme (receiving port) comprises using LIRS based processor usage (more processors) tracking .
EP2065804A1
CLAIM 1
A virtual machine monitor for operating a virtual server on a multiprocessor system connecting one or more processors (processor usage, processor usage tracking) , one or more memories , and one or more I/O devices by an internal network , the virtual machine monitor comprising : a physical hardware information acquiring portion for acquiring information of constituting a hardware including physical position information of the hardware including the processor , the memory , the I/O device , and the network of the multiprocessor system ;
a receiving port (replacement scheme) ion for receiving a forming request including a number of the processors of the formed virtual server , an amount of the memory , and a policy of allocating the I/O device and a resource ;
and an allocation processing portion for allocating the processor and the memory to the virtual server to satisfy the allocation policy after allocating the I/O device to the virtual server based on the received forming request .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme (receiving port) comprises using LIRS based memory usage tracking .
EP2065804A1
CLAIM 1
A virtual machine monitor for operating a virtual server on a multiprocessor system connecting one or more processors , one or more memories , and one or more I/O devices by an internal network , the virtual machine monitor comprising : a physical hardware information acquiring portion for acquiring information of constituting a hardware including physical position information of the hardware including the processor , the memory , the I/O device , and the network of the multiprocessor system ;
a receiving port (replacement scheme) ion for receiving a forming request including a number of the processors of the formed virtual server , an amount of the memory , and a policy of allocating the I/O device and a resource ;
and an allocation processing portion for allocating the processor and the memory to the virtual server to satisfy the allocation policy after allocating the I/O device to the virtual server based on the received forming request .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme (receiving port) .
EP2065804A1
CLAIM 1
A virtual machine monitor for operating a virtual server on a multiprocessor system connecting one or more processors , one or more memories , and one or more I/O devices by an internal network , the virtual machine monitor comprising : a physical hardware information acquiring portion for acquiring information of constituting a hardware including physical position information of the hardware including the processor , the memory , the I/O device , and the network of the multiprocessor system ;
a receiving port (replacement scheme) ion for receiving a forming request including a number of the processors of the formed virtual server , an amount of the memory , and a policy of allocating the I/O device and a resource ;
and an allocation processing portion for allocating the processor and the memory to the virtual server to satisfy the allocation policy after allocating the I/O device to the virtual server based on the received forming request .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (more processors) , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
EP2065804A1
CLAIM 1
A virtual machine monitor for operating a virtual server on a multiprocessor system connecting one or more processors (processor usage, processor usage tracking) , one or more memories , and one or more I/O devices by an internal network , the virtual machine monitor comprising : a physical hardware information acquiring portion for acquiring information of constituting a hardware including physical position information of the hardware including the processor , the memory , the I/O device , and the network of the multiprocessor system ;
a receiving portion for receiving a forming request including a number of the processors of the formed virtual server , an amount of the memory , and a policy of allocating the I/O device and a resource ;
and an allocation processing portion for allocating the processor and the memory to the virtual server to satisfy the allocation policy after allocating the I/O device to the virtual server based on the received forming request .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme (receiving port) .
EP2065804A1
CLAIM 1
A virtual machine monitor for operating a virtual server on a multiprocessor system connecting one or more processors , one or more memories , and one or more I/O devices by an internal network , the virtual machine monitor comprising : a physical hardware information acquiring portion for acquiring information of constituting a hardware including physical position information of the hardware including the processor , the memory , the I/O device , and the network of the multiprocessor system ;
a receiving port (replacement scheme) ion for receiving a forming request including a number of the processors of the formed virtual server , an amount of the memory , and a policy of allocating the I/O device and a resource ;
and an allocation processing portion for allocating the processor and the memory to the virtual server to satisfy the allocation policy after allocating the I/O device to the virtual server based on the received forming request .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (more processors) tracking .
EP2065804A1
CLAIM 1
A virtual machine monitor for operating a virtual server on a multiprocessor system connecting one or more processors (processor usage, processor usage tracking) , one or more memories , and one or more I/O devices by an internal network , the virtual machine monitor comprising : a physical hardware information acquiring portion for acquiring information of constituting a hardware including physical position information of the hardware including the processor , the memory , the I/O device , and the network of the multiprocessor system ;
a receiving portion for receiving a forming request including a number of the processors of the formed virtual server , an amount of the memory , and a policy of allocating the I/O device and a resource ;
and an allocation processing portion for allocating the processor and the memory to the virtual server to satisfy the allocation policy after allocating the I/O device to the virtual server based on the received forming request .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme (receiving port) .
EP2065804A1
CLAIM 1
A virtual machine monitor for operating a virtual server on a multiprocessor system connecting one or more processors , one or more memories , and one or more I/O devices by an internal network , the virtual machine monitor comprising : a physical hardware information acquiring portion for acquiring information of constituting a hardware including physical position information of the hardware including the processor , the memory , the I/O device , and the network of the multiprocessor system ;
a receiving port (replacement scheme) ion for receiving a forming request including a number of the processors of the formed virtual server , an amount of the memory , and a policy of allocating the I/O device and a resource ;
and an allocation processing portion for allocating the processor and the memory to the virtual server to satisfy the allocation policy after allocating the I/O device to the virtual server based on the received forming request .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (more processors) , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
EP2065804A1
CLAIM 1
A virtual machine monitor for operating a virtual server on a multiprocessor system connecting one or more processors (processor usage, processor usage tracking) , one or more memories , and one or more I/O devices by an internal network , the virtual machine monitor comprising : a physical hardware information acquiring portion for acquiring information of constituting a hardware including physical position information of the hardware including the processor , the memory , the I/O device , and the network of the multiprocessor system ;
a receiving portion for receiving a forming request including a number of the processors of the formed virtual server , an amount of the memory , and a policy of allocating the I/O device and a resource ;
and an allocation processing portion for allocating the processor and the memory to the virtual server to satisfy the allocation policy after allocating the I/O device to the virtual server based on the received forming request .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme (receiving port) .
EP2065804A1
CLAIM 1
A virtual machine monitor for operating a virtual server on a multiprocessor system connecting one or more processors , one or more memories , and one or more I/O devices by an internal network , the virtual machine monitor comprising : a physical hardware information acquiring portion for acquiring information of constituting a hardware including physical position information of the hardware including the processor , the memory , the I/O device , and the network of the multiprocessor system ;
a receiving port (replacement scheme) ion for receiving a forming request including a number of the processors of the formed virtual server , an amount of the memory , and a policy of allocating the I/O device and a resource ;
and an allocation processing portion for allocating the processor and the memory to the virtual server to satisfy the allocation policy after allocating the I/O device to the virtual server based on the received forming request .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (more processors) tracking .
EP2065804A1
CLAIM 1
A virtual machine monitor for operating a virtual server on a multiprocessor system connecting one or more processors (processor usage, processor usage tracking) , one or more memories , and one or more I/O devices by an internal network , the virtual machine monitor comprising : a physical hardware information acquiring portion for acquiring information of constituting a hardware including physical position information of the hardware including the processor , the memory , the I/O device , and the network of the multiprocessor system ;
a receiving portion for receiving a forming request including a number of the processors of the formed virtual server , an amount of the memory , and a policy of allocating the I/O device and a resource ;
and an allocation processing portion for allocating the processor and the memory to the virtual server to satisfy the allocation policy after allocating the I/O device to the virtual server based on the received forming request .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme (receiving port) .
EP2065804A1
CLAIM 1
A virtual machine monitor for operating a virtual server on a multiprocessor system connecting one or more processors , one or more memories , and one or more I/O devices by an internal network , the virtual machine monitor comprising : a physical hardware information acquiring portion for acquiring information of constituting a hardware including physical position information of the hardware including the processor , the memory , the I/O device , and the network of the multiprocessor system ;
a receiving port (replacement scheme) ion for receiving a forming request including a number of the processors of the formed virtual server , an amount of the memory , and a policy of allocating the I/O device and a resource ;
and an allocation processing portion for allocating the processor and the memory to the virtual server to satisfy the allocation policy after allocating the I/O device to the virtual server based on the received forming request .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20090222654A1

Filed: 2008-07-22     Issued: 2009-09-03

Distribution of tasks among asymmetric processing elements

(Original Assignee) Intel Corp     (Current Assignee) Intel Corp

Herbert Hum, Eric Sprangle, Doug Carmean, Rajesh Kumar
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate (low power state) of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20090222654A1
CLAIM 2
. The apparatus of claim 1 , further including an interrupt controller to signal the logic to cause the first processing element to transition to the low power state (consumption rate, memory consumption rate) and the second processing element to transition to the operating power state .
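
One consumption signal that recurs in claims 1, 7 and 13 is "a change region size based on changed regions of a graphical display generated by the one or more VMs". A crude way to picture that signal, comparing two framebuffer snapshots over a fixed tile size, is sketched below; the tile size and the sample data are invented, and the target patent does not prescribe this method.

```python
def change_region_size(prev_frame: bytes, curr_frame: bytes, tile: int = 16) -> int:
    """Count fixed-size tiles whose bytes differ between two framebuffer
    snapshots; the count stands in for the claimed change region size."""
    changed = 0
    for start in range(0, len(curr_frame), tile):
        if prev_frame[start:start + tile] != curr_frame[start:start + tile]:
            changed += 1
    return changed

prev = bytes(256)
curr = bytes(128) + bytes([0xFF]) * 16 + bytes(112)
size = change_region_size(prev, curr)
print(size, size > 0)   # 1 True: this is the change fed to the threshold test
```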

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate (low power state) of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20090222654A1
CLAIM 2
. The apparatus of claim 1 , further including an interrupt controller to signal the logic to cause the first processing element to transition to the low power state (consumption rate, memory consumption rate) and the second processing element to transition to the operating power state .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate (low power state) of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (processing elements) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20090222654A1
CLAIM 1
. An apparatus comprising : at least two asymmetric processing elements (I/O access rate) having different maximum performance capabilities and different power consumption properties ;
logic to cause a first of the at least two asymmetric processing elements to transition to a low-power state and to cause a second of the at least two asymmetric cores to transition to an operating power state in response to an interrupt signal .

US20090222654A1
CLAIM 2
. The apparatus of claim 1 , further including an interrupt controller to signal the logic to cause the first processing element to transition to the low power state (consumption rate, memory consumption rate) and the second processing element to transition to the operating power state .
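
The US20090222654A1 mapping turns on claims 1 and 2: two asymmetric processing elements with different maximum performance and power properties, where an interrupt causes one element to enter a low-power state and the other the operating state. A toy hand-off is sketched below; the class, field and state names are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class ProcessingElement:
    name: str
    max_perf: float            # different maximum performance capabilities
    active_power_w: float      # different power consumption properties
    state: str = "low-power"

def on_interrupt(pe_to_sleep: ProcessingElement, pe_to_wake: ProcessingElement):
    """Paraphrase of the claim 2 hand-off: one element transitions to the
    low-power state, the other to the operating power state."""
    pe_to_sleep.state, pe_to_wake.state = "low-power", "operating"

little = ProcessingElement("little-core", 1.0, 0.5, state="operating")
big = ProcessingElement("big-core", 3.0, 4.0)
on_interrupt(little, big)
print(little.state, big.state)   # low-power operating
```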




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
WO2008127622A2

Filed: 2008-04-09     Issued: 2008-10-23

Data parallel computing on multiple processors

(Original Assignee) Apple Inc.     

Aaftab Munshi, Jeremy Sandmel
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (central processing unit, processor usage) , memory usage (memory usage) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2008127622A2
CLAIM 2
. The method of claim 1 , wherein the physical compute devices include one or more central processing unit (processor usage, I/O access rate) s (CPUs) or graphics processing units (GPUs) .

WO2008127622A2
CLAIM 11
. The method of claim 10 , wherein the execution status includes number of running threads , a local memory usage (memory usage) level , or a processor usage (processor usage, I/O access rate) level .
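
For the WO2008127622A2 mapping, claim 11's "execution status" (number of running threads, local memory usage level and processor usage level) is the kind of per-device record sketched below. The field values and the least-loaded selection are placeholders, not the reference's scheduling algorithm.

```python
from dataclasses import dataclass

@dataclass
class ExecutionStatus:             # claim 11's three components, simplified
    running_threads: int
    memory_usage_level: float      # fraction of local memory in use (assumed)
    processor_usage_level: float   # fraction of processor busy (assumed)

def least_loaded(statuses: dict) -> str:
    """Pick the compute device (CPU or GPU) with the lowest processor usage
    level; an illustrative choice, not the reference's algorithm."""
    return min(statuses, key=lambda dev: statuses[dev].processor_usage_level)

statuses = {"cpu0": ExecutionStatus(12, 0.40, 0.75),
            "gpu0": ExecutionStatus(2048, 0.60, 0.35)}
print(least_loaded(statuses))      # gpu0
```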

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (central processing unit, processor usage) tracking .
WO2008127622A2
CLAIM 2
. The method of claim 1 , wherein the physical compute devices include one or more central processing unit (processor usage, I/O access rate) s (CPUs) or graphics processing units (GPUs) .

WO2008127622A2
CLAIM 11
. The method of claim 10 , wherein the execution status includes number of running threads , a local memory usage level , or a processor usage (processor usage, I/O access rate) level .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (memory usage) tracking .
WO2008127622A2
CLAIM 11
. The method of claim 10 , wherein the execution status includes number of running threads , a local memory usage (memory usage) level , or a processor usage level .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (central processing unit, processor usage) , memory usage (memory usage) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2008127622A2
CLAIM 2
. The method of claim 1 , wherein the physical compute devices include one or more central processing unit (processor usage, I/O access rate) s (CPUs) or graphics processing units (GPUs) .

WO2008127622A2
CLAIM 11
. The method of claim 10 , wherein the execution statuses includes number of running threads , a local memory usage (memory usage) level , or a processor usage (processor usage, I/O access rate) level .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (central processing unit, processor usage) tracking .
WO2008127622A2
CLAIM 2
. The method of claim 1 , wherein the physical compute devices include one or more central processing unit (processor usage, I/O access rate) s (CPUs) or graphics processing units (GPUs) .

WO2008127622A2
CLAIM 11
. The method of claim 10 , wherein the execution statuses includes number of running threads , a local memory usage level , or a processor usage (processor usage, I/O access rate) level .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (memory usage) tracking .
WO2008127622A2
CLAIM 11
. The method of claim 10 , wherein the execution statuses includes number of running threads , a local memory usage (memory usage) level , or a processor usage level .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (central processing unit, processor usage) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (central processing unit, processor usage) , memory usage (memory usage) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2008127622A2
CLAIM 2
. The method of claim 1 , wherein the physical compute devices include one or more central processing unit (processor usage, I/O access rate) s (CPUs) or graphics processing units (GPUs) .

WO2008127622A2
CLAIM 11
. The method of claim 10 , wherein the execution statuses includes number of running threads , a local memory usage (memory usage) level , or a processor usage (processor usage, I/O access rate) level .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (central processing unit, processor usage) tracking .
WO2008127622A2
CLAIM 2
. The method of claim 1 , wherein the physical compute devices include one or more central processing unit (processor usage, I/O access rate) s (CPUs) or graphics processing units (GPUs) .

WO2008127622A2
CLAIM 11
. The method of claim 10 , wherein the execution statuses includes number of running threads , a local memory usage level , or a processor usage (processor usage, I/O access rate) level .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (memory usage) tracking .
WO2008127622A2
CLAIM 11
. The method of claim 10 , wherein the execution statuses includes number of running threads , a local memory usage (memory usage) level , or a processor usage level .
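
Note (illustrative only) : independent claims 1 , 7 and 13 charted above recite the same five-step flow — monitor a consumption rate , prioritize the VMs under a first scheme , detect a change exceeding a predetermined threshold , re-prioritize under a second scheme against a maximum allowed capacity , and migrate to alternate cloud resources . The Python sketch below is a hedged, non-authoritative model of that flow using invented names (VmStats , manage) ; it is not code from the '134 patent or from any cited reference .

from dataclasses import dataclass

@dataclass
class VmStats:
    vm_id: str
    cpu: float            # processor usage (0..1)
    memory: float         # memory usage (0..1)
    io_rate: float        # normalized I/O access rate
    change_region: float  # normalized changed-display-region size

    def consumption(self) -> float:
        # Aggregate consumption rate over the monitored metrics.
        return self.cpu + self.memory + self.io_rate + self.change_region

def manage(prev_rates, stats, threshold, max_capacity):
    # First resource management scheme: rank VMs by current consumption rate.
    ranked = sorted(stats, key=lambda s: s.consumption(), reverse=True)
    # Does the change in consumption rate exceed the predetermined threshold?
    exceeded = any(
        abs(s.consumption() - prev_rates.get(s.vm_id, 0.0)) > threshold
        for s in stats
    )
    migrated = []
    if exceeded:
        # Second scheme: re-prioritize against the maximum allowed capacity and
        # mark top-priority VMs for migration to alternate cloud resources.
        total = sum(s.consumption() for s in ranked)
        for s in ranked:
            if total <= max_capacity:
                break
            migrated.append(s.vm_id)  # placeholder for the actual migration call
            total -= s.consumption()
    return ranked, migrated

stats = [VmStats("vm1", 0.9, 0.7, 0.3, 0.1), VmStats("vm2", 0.2, 0.1, 0.1, 0.0)]
order, to_migrate = manage({"vm1": 1.2, "vm2": 0.4}, stats, threshold=0.5, max_capacity=1.5)
print(order[0].vm_id, to_migrate)  # vm1 ['vm1']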




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
WO2008127604A2

Filed: 2008-04-09     Issued: 2008-10-23

Shared stream memory on multiple processors

(Original Assignee) Apple Inc.     

Aaftab Munshi, Jeremy Sandmel
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (central processing unit) , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource (allocating memory) management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2008127604A2
CLAIM 1
. A computer implemented method comprising : accessing memory capability of one of a plurality of processing units for a memory capability requirement , the plurality of processing units including central processing unit (processor usage, I/O access rate) s (CPUs) and graphics processing units (GPUs) ;
loading an executable compiled from a program source to be executed in the one of a plurality of processing units coupled with a stream memory according to the memory capability requirement ;
and allocating memory (first resource) location for a variable in the program source by the executable according to the memory capability .

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the first resource (allocating memory) management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
WO2008127604A2
CLAIM 1
. A computer implemented method comprising : accessing memory capability of one of a plurality of processing units for a memory capability requirement , the plurality of processing units including central processing units (CPUs) and graphics processing units (GPUs) ;
loading an executable compiled from a program source to be executed in the one of a plurality of processing units coupled with a stream memory according to the memory capability requirement ;
and allocating memory (first resource) location for a variable in the program source by the executable according to the memory capability .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (central processing unit) tracking .
WO2008127604A2
CLAIM 1
. A computer implemented method comprising : accessing memory capability of one of a plurality of processing units for a memory capability requirement , the plurality of processing units including central processing unit (processor usage, I/O access rate) s (CPUs) and graphics processing units (GPUs) ;
loading an executable compiled from a program source to be executed in the one of a plurality of processing units coupled with a stream memory according to the memory capability requirement ;
and allocating memory location for a variable in the program source by the executable according to the memory capability .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (more process) , or resources included in virtual private networks (VPNs) .
WO2008127604A2
CLAIM 25
. A parallel computing architecture comprising : a host processor ;
one or more process (Internet resources) ing units coupled to the host processor ;
and a memory coupled to the host processor and the one or more processing units , an executable being loaded from the host processor to one of the one or more processing units to execute in a plurality of threads in parallel , the one of the one or more processing units having a memory capability , the executable having a local variable , wherein allocation of a memory location in the memory for the local variable is based on the memory capability .
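
Note (illustrative only) : WO2008127604A2 claims 1 and 25 , quoted above , turn on checking a processing unit's memory capability before loading a compiled executable and allocating memory for a program variable . The sketch below models that sequence under invented names (ProcessingUnit , select_unit , load_and_allocate) ; it is a hedged approximation , not Apple's actual runtime API .

from dataclasses import dataclass

@dataclass
class ProcessingUnit:
    kind: str          # "CPU" or "GPU"
    local_memory: int  # memory capability in bytes

def select_unit(units, required_bytes):
    # Access each unit's memory capability against the memory capability requirement.
    for unit in units:
        if unit.local_memory >= required_bytes:
            return unit
    raise RuntimeError("no processing unit satisfies the memory requirement")

def load_and_allocate(units, executable, variable_bytes):
    unit = select_unit(units, variable_bytes)
    # Loading the executable and allocating the variable are stubbed here; a real
    # runtime would hand both to the selected CPU or GPU.
    allocation = bytearray(variable_bytes)
    return unit, executable, allocation

units = [ProcessingUnit("CPU", 8 << 30), ProcessingUnit("GPU", 4 << 30)]
print(load_and_allocate(units, "kernel.bin", 1 << 20)[0].kind)  # CPU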

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (central processing unit) , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource (allocating memory) management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2008127604A2
CLAIM 1
. A computer implemented method comprising : accessing memory capability of one of a plurality of processing units for a memory capability requirement , the plurality of processing units including central processing unit (processor usage, I/O access rate) s (CPUs) and graphics processing units (GPUs) ;
loading an executable compiled from a program source to be executed in the one of a plurality of processing units coupled with a stream memory according to the memory capability requirement ;
and allocating memory (first resource) location for a variable in the program source by the executable according to the memory capability .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (central processing unit) tracking .
WO2008127604A2
CLAIM 1
. A computer implemented method comprising : accessing memory capability of one of a plurality of processing units for a memory capability requirement , the plurality of processing units including central processing unit (processor usage, I/O access rate) s (CPUs) and graphics processing units (GPUs) ;
loading an executable compiled from a program source to be executed in the one of a plurality of processing units coupled with a stream memory according to the memory capability requirement ;
and allocating memory location for a variable in the program source by the executable according to the memory capability .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (more process) , or resources included in virtual private networks (VPNs) .
WO2008127604A2
CLAIM 25
. A parallel computing architecture comprising : a host processor ;
one or more process (Internet resources) ing units coupled to the host processor ;
and a memory coupled to the host processor and the one or more processing units , an executable being loaded from the host processor to one of the one or more processing units to execute in a plurality of threads in parallel , the one of the one or more processing units having a memory capability , the executable having a local variable , wherein allocation of a memory location in the memory for the local variable is based on the memory capability .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (central processing unit) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource (allocating memory) management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (central processing unit) , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2008127604A2
CLAIM 1
. A computer implemented method comprising : accessing memory capability of one of a plurality of processing units for a memory capability requirement , the plurality of processing units including central processing unit (processor usage, I/O access rate) s (CPUs) and graphics processing units (GPUs) ;
loading an executable compiled from a program source to be executed in the one of a plurality of processing units coupled with a stream memory according to the memory capability requirement ;
and allocating memory (first resource) location for a variable in the program source by the executable according to the memory capability .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (central processing unit) tracking .
WO2008127604A2
CLAIM 1
. A computer implemented method comprising : accessing memory capability of one of a plurality of processing units for a memory capability requirement , the plurality of processing units including central processing unit (processor usage, I/O access rate) s (CPUs) and graphics processing units (GPUs) ;
loading an executable compiled from a program source to be executed in the one of a plurality of processing units coupled with a stream memory according to the memory capability requirement ;
and allocating memory location for a variable in the program source by the executable according to the memory capability .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (more process) , or resources included in virtual private networks (VPNs) .
WO2008127604A2
CLAIM 25
. A parallel computing architecture comprising : a host processor ;
one or more process (Internet resources) ing units coupled to the host processor ;
and a memory coupled to the host processor and the one or more processing units , an executable being loaded from the host processor to one of the one or more processing units to execute in a plurality of threads in parallel , the one of the one or more processing units having a memory capability , the executable having a local variable , wherein allocation of a memory location in the memory for the local variable is based on the memory capability .
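
Note (illustrative only) : the dependent claims charted in this section (claims 2-4 , 8-10 , 12 , 14-16 and 18 of the '134 patent) recite prioritizing the VMs with a low inter-reference recency set (LIRS) or least recently used (LRU) replacement scheme driven by processor or memory usage tracking . The following much-simplified sketch contrasts an inter-reference-recency ordering with a plain recency ordering using invented helper names ; it is not the full LIRS algorithm and is not taken from the patent .

# Simplified illustration only: order VM identifiers by inter-reference recency
# (IRR, the gap between the last two accesses) as LIRS does, versus by plain
# recency as LRU does. Accesses here stand in for tracked processor or memory
# usage events per VM.
def lirs_like_order(accesses):
    last, irr = {}, {}
    for t, vm in enumerate(accesses):
        if vm in last:
            irr[vm] = t - last[vm]  # inter-reference recency
        last[vm] = t
    # Small IRR => "hot" (LIR) candidate; VMs seen only once sort last.
    return sorted(last, key=lambda vm: irr.get(vm, float("inf")))

def lru_order(accesses):
    last = {vm: t for t, vm in enumerate(accesses)}
    # Most recently used first.
    return sorted(last, key=last.get, reverse=True)

trace = ["vm1", "vm2", "vm1", "vm3", "vm2", "vm1"]
print(lirs_like_order(trace))  # ['vm1', 'vm2', 'vm3']
print(lru_order(trace))        # ['vm1', 'vm2', 'vm3']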




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
EP2135163A2

Filed: 2008-04-09     Issued: 2009-12-23

Data parallel computing on multiple processors

(Original Assignee) Apple Inc     (Current Assignee) Apple Inc

Aaftab Munshi, Jeremy Sandmel
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (central processing unit, processor usage) , memory usage (memory usage) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
EP2135163A2
CLAIM 2
. The method of claim 1 , wherein the physical compute devices include one or more central processing unit (processor usage, I/O access rate) s (CPUs) or graphics processing units (GPUs) .

EP2135163A2
CLAIM 11
. The method of claim 10 , wherein the execution statuses includes number of running threads , a local memory usage (memory usage) level , or a processor usage (processor usage, I/O access rate) level .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (central processing unit, processor usage) tracking .
EP2135163A2
CLAIM 2
. The method of claim 1 , wherein the physical compute devices include one or more central processing unit (processor usage, I/O access rate) s (CPUs) or graphics processing units (GPUs) .

EP2135163A2
CLAIM 11
. The method of claim 10 , wherein the execution statuses includes number of running threads , a local memory usage level , or a processor usage (processor usage, I/O access rate) level .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (memory usage) tracking .
EP2135163A2
CLAIM 11
. The method of claim 10 , wherein the execution statuses includes number of running threads , a local memory usage (memory usage) level , or a processor usage level .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (central processing unit, processor usage) , memory usage (memory usage) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
EP2135163A2
CLAIM 2
. The method of claim 1 , wherein the physical compute devices include one or more central processing unit (processor usage, I/O access rate) s (CPUs) or graphics processing units (GPUs) .

EP2135163A2
CLAIM 11
. The method of claim 10 , wherein the execution statuses includes number of running threads , a local memory usage (memory usage) level , or a processor usage (processor usage, I/O access rate) level .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (central processing unit, processor usage) tracking .
EP2135163A2
CLAIM 2
. The method of claim 1 , wherein the physical compute devices include one or more central processing unit (processor usage, I/O access rate) s (CPUs) or graphics processing units (GPUs) .

EP2135163A2
CLAIM 11
. The method of claim 10 , wherein the execution statuses includes number of running threads , a local memory usage level , or a processor usage (processor usage, I/O access rate) level .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (memory usage) tracking .
EP2135163A2
CLAIM 11
. The method of claim 10 , wherein the execution statuses includes number of running threads , a local memory usage (memory usage) level , or a processor usage level .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (central processing unit, processor usage) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (central processing unit, processor usage) , memory usage (memory usage) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
EP2135163A2
CLAIM 2
. The method of claim 1 , wherein the physical compute devices include one or more central processing unit (processor usage, I/O access rate) s (CPUs) or graphics processing units (GPUs) .

EP2135163A2
CLAIM 11
. The method of claim 10 , wherein the execution statuses includes number of running threads , a local memory usage (memory usage) level , or a processor usage (processor usage, I/O access rate) level .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (central processing unit, processor usage) tracking .
EP2135163A2
CLAIM 2
. The method of claim 1 , wherein the physical compute devices include one or more central processing unit (processor usage, I/O access rate) s (CPUs) or graphics processing units (GPUs) .

EP2135163A2
CLAIM 11
. The method of claim 10 , wherein the execution statuses includes number of running threads , a local memory usage level , or a processor usage (processor usage, I/O access rate) level .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (memory usage) tracking .
EP2135163A2
CLAIM 11
. The method of claim 10 , wherein the execution statuses includes number of running threads , a local memory usage (memory usage) level , or a processor usage level .
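
Note (illustrative only) : the "change in the consumption rate ... exceeds a predetermined threshold" limitation common to the independent claims compares per-metric deltas (processor usage , memory usage , I/O access rate , change region size) against a threshold . A hedged numerical sketch with invented names follows .

def change_exceeds_threshold(previous, current, threshold):
    # Per-metric delta in the consumption rate of the cloud resources.
    metrics = ("cpu", "memory", "io_rate", "change_region")
    deltas = {m: abs(current[m] - previous[m]) for m in metrics}
    return any(d > threshold for d in deltas.values()), deltas

prev = {"cpu": 0.40, "memory": 0.55, "io_rate": 0.10, "change_region": 0.05}
curr = {"cpu": 0.72, "memory": 0.57, "io_rate": 0.12, "change_region": 0.06}
exceeded, deltas = change_exceeds_threshold(prev, curr, threshold=0.25)
print(exceeded)  # True: the ~0.32 CPU delta alone trips the 0.25 threshold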




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
EP2012233A2

Filed: 2008-03-28     Issued: 2009-01-07

System and method to optimize OS scheduling decisions for power savings based on temporal characteristics of the scheduled entity and system workload

(Original Assignee) Intel Corp     (Current Assignee) Intel Corp

Russell J. Fenger, Leena K. Puthiyedath
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing (shared resources) environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
EP2012233A2
CLAIM 14
The method as recited in claim 12 , wherein the performance schedule policy uses information related to cache utilization , cache sharing , cooperative cache sharing , shared resources (cloud computing, cloud computing resource manager, I/O access rate) among threads , thread priority , and bus utilization by other threads in determining on which logical processor to run the ready thread .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions (other threads) that , when executed by one or more processors , operatively enable a cloud computing (shared resources) resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
EP2012233A2
CLAIM 6
The system as recited in claim 5 , wherein selection of the processor also depends on cache utilization , cache sharing , cooperative cache sharing , and bus utilization by other threads (therein instructions) on the plurality of logical processors .

EP2012233A2
CLAIM 14
The method as recited in claim 12 , wherein the performance schedule policy uses information related to cache utilization , cache sharing , cooperative cache sharing , shared resources (cloud computing, cloud computing resource manager, I/O access rate) among threads , thread priority , and bus utilization by other threads in determining on which logical processor to run the ready thread .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (shared resources) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
EP2012233A2
CLAIM 14
The method as recited in claim 12 , wherein the performance schedule policy uses information related to cache utilization , cache sharing , cooperative cache sharing , shared resources (cloud computing, cloud computing resource manager, I/O access rate) among threads , thread priority , and bus utilization by other threads in determining on which logical processor to run the ready thread .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (shared resources) resource manager to use LIRS based processor usage tracking .
EP2012233A2
CLAIM 14
The method as recited in claim 12 , wherein the performance schedule policy uses information related to cache utilization , cache sharing , cooperative cache sharing , shared resources (cloud computing, cloud computing resource manager, I/O access rate) among threads , thread priority , and bus utilization by other threads in determining on which logical processor to run the ready thread .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (shared resources) resource manager to use LIRS based memory usage tracking .
EP2012233A2
CLAIM 14
The method as recited in claim 12 , wherein the performance schedule policy uses information related to cache utilization , cache sharing , cooperative cache sharing , shared resources (cloud computing, cloud computing resource manager, I/O access rate) among threads , thread priority , and bus utilization by other threads in determining on which logical processor to run the ready thread .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (shared resources) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
EP2012233A2
CLAIM 14
The method as recited in claim 12 , wherein the performance schedule policy uses information related to cache utilization , cache sharing , cooperative cache sharing , shared resources (cloud computing, cloud computing resource manager, I/O access rate) among threads , thread priority , and bus utilization by other threads in determining on which logical processor to run the ready thread .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing (shared resources) environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions (other threads) that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (shared resources) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
EP2012233A2
CLAIM 6
The system as recited in claim 5 , wherein selection of the processor also depends on cache utilization , cache sharing , cooperative cache sharing , and bus utilization by other threads (therein instructions) on the plurality of logical processors .

EP2012233A2
CLAIM 14
The method as recited in claim 12 , wherein the performance schedule policy uses information related to cache utilization , cache sharing , cooperative cache sharing , shared resources (cloud computing, cloud computing resource manager, I/O access rate) among threads , thread priority , and bus utilization by other threads in determining on which logical processor to run the ready thread .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (shared resources) resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
EP2012233A2
CLAIM 14
The method as recited in claim 12 , wherein the performance schedule policy uses information related to cache utilization , cache sharing , cooperative cache sharing , shared resources (cloud computing, cloud computing resource manager, I/O access rate) among threads , thread priority , and bus utilization by other threads in determining on which logical processor to run the ready thread .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (shared resources) resource manager to use LIRS based processor usage tracking .
EP2012233A2
CLAIM 14
The method as recited in claim 12 , wherein the performance schedule policy uses information related to cache utilization , cache sharing , cooperative cache sharing , shared resources (cloud computing, cloud computing resource manager, I/O access rate) among threads , thread priority , and bus utilization by other threads in determining on which logical processor to run the ready thread .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (shared resources) resource manager to use LIRS based memory usage tracking .
EP2012233A2
CLAIM 14
The method as recited in claim 12 , wherein the performance schedule policy uses information related to cache utilization , cache sharing , cooperative cache sharing , shared resources (cloud computing, cloud computing resource manager, I/O access rate) among threads , thread priority , and bus utilization by other threads in determining on which logical processor to run the ready thread .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (shared resources) resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
EP2012233A2
CLAIM 14
The method as recited in claim 12 , wherein the performance schedule policy uses information related to cache utilization , cache sharing , cooperative cache sharing , shared resources (cloud computing, cloud computing resource manager, I/O access rate) among threads , thread priority , and bus utilization by other threads in determining on which logical processor to run the ready thread .
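
Note (illustrative only) : EP2012233A2 claim 14 , quoted above , selects the logical processor for a ready thread from cache utilization , shared-resource contention among threads , thread priority and bus utilization . The scoring sketch below is a hedged illustration with arbitrary weights and invented names ; it does not reproduce Intel's disclosed policy .

def pick_logical_processor(candidates, thread_priority):
    # Lower score is better: penalize busy caches, contended shared resources
    # and a busy bus, scaled so higher-priority threads avoid contention more.
    def score(cpu):
        penalty = (0.4 * cpu["cache_utilization"]
                   + 0.3 * cpu["shared_resource_contention"]
                   + 0.3 * cpu["bus_utilization"])
        return penalty * (1.0 + 0.1 * thread_priority)
    return min(candidates, key=score)

cpus = [
    {"id": 0, "cache_utilization": 0.8, "shared_resource_contention": 0.6, "bus_utilization": 0.5},
    {"id": 1, "cache_utilization": 0.3, "shared_resource_contention": 0.2, "bus_utilization": 0.4},
]
print(pick_logical_processor(cpus, thread_priority=5)["id"])  # 1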




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
WO2008093066A2

Filed: 2008-01-28     Issued: 2008-08-07

Immediate ready implementation of virtually congestion free guaranteed service capable network : nextgentcp/ftp/udp intermediate buffer cyclical sack re-use

(Original Assignee) Bob Tang     

Bob Tang
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (bandwidth requirements) , or input/output (I/O) access rates (end user) for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates (destination nodes) , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2008093066A2
CLAIM 3
. Methods for virtually congestion free guaranteed service capable data communications network/ Internet/ Internet subsets/ Proprietary Internet segment/WAN/LAN [ hereinafter refers to as network] with any combinations/ subsets of features (a) to(f) : (a) where all packets/data units sent from a source within the network arriving at a destination within the network all arrive without a single packet being dropped due to network congestions . (b) applies only to all packets/ data units requiring guaranteed service capability . (c) where the packet/ data unit traffics are intercepted and processed before being forwarded onwards . (d) where the sending source/ sources traffics are intercepted processed and forwarded onwards , and/or the packet/ data unit traffics are only intercepted processed and forwarded onwards at the originating sending source/ sources . (e) where the existing TCP/IP stack at sending source and/or receiving destination is/are modified to achieve the same end-to-end performance results between any source-destination nodes (I/O access rates) pair within the network , without requiring use of existing QoS/MPLS techniques nor requiring any of the switches/routers softwares within the network to be modified or contribute to achieving the end- to-end performance results nor requiring provision of unlimited bandwidths at each and every inter-node links within the network . (f) in which traffics in said network comprises mostly of TCP traffics , and other traffics types such as UDP/ICMP . . . etc do not exceed , or the applications generating other traffics types are arranged not to exceed , the whole available bandwidth of any of the inter- node link/s within the network at any time , where if other traffics types such as UDP/ICMP . . do exceed the whole available bandwidth of any of the inter- node link/s within the network at any time only the source-destination nodes pair traffics traversing the thus affected inter- node link/s within the network would not necessarily be virtually congestion free guaranteed service capable during this time and/or all packets/data units sent from a source within the network arriving at a destination within the network would not necessarily all arrive ie packet/s do gets dropped due to network congestions .

WO2008093066A2
CLAIM 24
. Method to overcome combined effects of remote receiver TCP's buffer size limitation & high transit link's packet drop rates on throughputs achievable (such as BULK FTPs , High Energy Grids Transfer) , throughputs achievable here may be reduced many times magnitudes order smaller than actual available bottleneck bandwidth : (A) TCP SACK mechanism should be modified to have unlimited SACK BLOCKS in SACK field , so within each RTT/ each fast retransmit phase ALL missing SACK Gaps SeqNo/ SeqNo blocks could be fast retransmit requested . OR could be modified so that ALL missing SACK Gaps SeqNo/ SeqNo blocks could be contained within pre-agreed formatted packet/s' data payload transmitted to sender TCP for fast retransmissions . OR existing max 3 blocks SACK mechanism could be modified so that ALL missing SACK Gaps SeqNos/ SeqNo blocks could cyclical sequentially be indicated within a number of consecutive DUPACKs (each containing progressively larger value yet unindicated missing SACK Gaps SeqNos/ SeqNo blocks) ie a necessary number of DUPACKs would be forwarded sufficiently to request all the missing SACK SeqNos/ SeqNo blocks , each DUPACK packets repeatedly uses the existing 3 SACK block fields to request as yet unrequested progressively larger SACK Gaps SeqNos/ SeqNo blocks for retransmission WITHIN same fast retransmit phase/ same RTT period . AND/ OR (B) Optional but preferable TCP be also modified to have very large (or unlimited linked list structure , size of which may be incremented dynamically allocated as & when needed) receiver buffer . OR all receiver TCP buffered packets / all receiver TCP buffered 'disjoint chunks' should all be moved from receiver buffer into dynamic arbitrary large size allocated as needed 'temporary space' , while in this 'temporary space' awaits missing gap packets to be fast retransmit received filling the holes before forwarding onwards non-gap continuous SeqNo packets onwards to end user (access rates) application/s . OR (C) Instead of above direct TCP source code modifications , an independent 'intermediate buffer' intercept software can be implemented sitting between the incoming network & receiver TCP to give effects to above foregoing (A) & (B) , working in cooperation with earlier sender based TCP Accelerator software : . implement an unlimited linked list holding all arriving packets in well ordered SeqNo , this sits at remote PC situated between the sender TCPAccel & remote receiver TCP , does all 3rd DUP ACKs processing towards sender TCP (which could even just be notifying sender TCPAccel of all gaps/ gap blocks , or unlimited normal SACK blocks) THEN forward continuous SeqNo packets to remote receiver MSTCP when packets non-disjointed) THUS remote MSTCP now appears to have unlimited TCP buffer & mass drops problem now completely disappear .

WO2008093066A2
CLAIM 26
. Method to adapt various earlier described external public Internet increment deployable TCP/ UDP /DCCP/ RTSP modifications (AI : allowed inFlights scheme , with or without 'intermediate buffer' / Cyclical SACK Re-use schemes to be install in all network nodes/ TCP UDP /DCCP/ RTSP sources within proprietary LAN/ WAN/ external Internet segments , providing instant guaranteed PSTN transmission qualities among all nodes or all '1st priority' traffic sources requiring guaranteed real time critical deliveries , requires additional refinements here (also assuming all , or majority of sending traffics sources' protocols are so modified) : at all times (during fast retransmit phase , or normal phase) , if incoming ACK's/ DUPACK's RTT (or OTT) > min RTT (or minOTT) + specified tolerance variance eg 25ms + optionally specified additional threshold eg 50ms THEN immediately reduce AI size to AI/ (1 + latest RTT or latest OTT where appropriate - minRTT or minOTT where appropriate) THUS total AI allowed inFlights bytes from all modified traffic sources (may further assume limits total maximum aggregate peak '1st priority' eg VoIP bandwidth requirements (memory usage, memory consumption rate) at any time is always much less than available network bandwidth , also 1st priority traffics sources could be assigned much larger specified tolerance value eg 100ms & much larger additional threshold value eg 150ms) most of the times would never ever cause additional packet delivery latency more than eg 25ms + optional 50ms here BEYOND the absolute minimum uncongested RTT/ uncongested OTT : . after reduction CAI will stop forwarding UNTIL sufficient number of returning ACKs sufficiently shift sliding window's left edge , we do not want to overly continuously reduce CAI , so this should happen only if total extra buffer delays > eg 25ms + 50ms . also CAI algorithm should be further modified to now not allow 'linear increment' (eg previously when ACKs return late thus 'linear increment' only not 'exponential increment') WHATSOEVER AT ANYTIME if curRTT > minRTT + eg 25ms , thus enabling proprietary LAN/WAN network flows to STABILISE utilise near 100% bandwidths BUT not to cause buffer delays to grow beyond eg 25ms (allowing linear increments whenever ACK returns even if very very late would invariably cause network buffer delays to approach maximum , destroys realtime critical deliveries for 1st priority traffics) .
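
Note (illustrative only) : claim 26 of WO2008093066A2 , quoted above , reduces the allowed-inFlights (AI) window to AI / (1 + latest RTT - min RTT) once the observed RTT exceeds min RTT plus a tolerance (eg 25ms) and an optional additional threshold (eg 50ms) . A hedged arithmetic sketch of that rule , with invented names and times in seconds , follows .

def adjust_allowed_inflights(ai_bytes, latest_rtt, min_rtt,
                             tolerance=0.025, extra_threshold=0.050):
    # Reduce AI only when RTT drifts past minRTT plus tolerance plus threshold.
    if latest_rtt > min_rtt + tolerance + extra_threshold:
        return int(ai_bytes / (1.0 + (latest_rtt - min_rtt)))
    return ai_bytes

print(adjust_allowed_inflights(64000, latest_rtt=0.210, min_rtt=0.100))  # 57657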

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (bandwidth requirements) tracking .
WO2008093066A2
CLAIM 26
. Method to adapt various earlier described external public Internet increment deployable TCP/ UDP /DCCP/ RTSP modifications (AI : allowed inFlights scheme , with or without 'intermediate buffer' / Cyclical SACK Re-use schemes) to be installed in all network nodes/ TCP/ UDP /DCCP/ RTSP sources within proprietary LAN/ WAN/ external Internet segments , providing instant guaranteed PSTN transmission qualities among all nodes or all '1st priority' traffic sources requiring guaranteed real time critical deliveries ; requires additional refinements here (also assuming all , or a majority of , sending traffic sources' protocols are so modified) : at all times (during fast retransmit phase , or normal phase) , if incoming ACK's/ DUPACK's RTT (or OTT) > min RTT (or minOTT) + specified tolerance variance eg 25ms + optionally specified additional threshold eg 50ms , THEN immediately reduce AI size to AI/ (1 + latest RTT or latest OTT where appropriate - minRTT or minOTT where appropriate) ; THUS total AI allowed inFlights bytes from all modified traffic sources (may further assume limits total maximum aggregate peak '1st priority' eg VoIP bandwidth requirements (memory usage, memory consumption rate) at any time is always much less than available network bandwidth , also 1st priority traffic sources could be assigned a much larger specified tolerance value eg 100ms & a much larger additional threshold value eg 150ms) most of the times would never cause additional packet delivery latency more than eg 25ms + optional 50ms here BEYOND the absolute minimum uncongested RTT/ uncongested OTT : after reduction CAI will stop forwarding UNTIL a sufficient number of returning ACKs sufficiently shift the sliding window's left edge ; we do not want to overly and continuously reduce CAI , so this should happen only if total extra buffer delays > eg 25ms + 50ms . Also the CAI algorithm should be further modified to no longer allow 'linear increment' (eg previously , when ACKs return late , only 'linear increment' was allowed , not 'exponential increment') WHATSOEVER AT ANY TIME if curRTT > minRTT + eg 25ms , thus enabling proprietary LAN/WAN network flows to STABILISE and utilise near 100% of bandwidth BUT not cause buffer delays to grow beyond eg 25ms (allowing linear increments whenever an ACK returns , even if very late , would invariably cause network buffer delays to approach maximum , destroying realtime critical deliveries for 1st priority traffics) .
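For readability, the following is a minimal Python sketch of the rate-control rule recited in WO2008093066A2 Claim 26 as reproduced above. It is an illustrative restatement only: the class and variable names (AllowedInFlights, tolerance_ms, etc.) are assumptions, not terms from either patent, and the conversion of the claimed divisor (1 + latest RTT - minRTT) into seconds is an assumption because the claim does not state units. The sketch shows the two quoted rules: shrink the allowed-inFlights budget when a measured RTT exceeds minRTT plus the tolerance (eg 25 ms) plus the optional additional threshold (eg 50 ms), and suppress any linear increment while the current RTT exceeds minRTT plus the tolerance.

```python
# Illustrative sketch only; names are assumptions, not claim terms.

class AllowedInFlights:
    """Tracks an 'allowed inFlights' (AI) byte budget per WO2008093066A2 claim 26."""

    def __init__(self, ai_bytes, tolerance_ms=25.0, extra_threshold_ms=50.0):
        self.ai_bytes = ai_bytes
        self.tolerance_ms = tolerance_ms
        self.extra_threshold_ms = extra_threshold_ms
        self.min_rtt_ms = None  # best estimate of the uncongested RTT

    def on_ack(self, rtt_ms, segment_bytes):
        # Maintain the minimum observed RTT as the uncongested estimate.
        if self.min_rtt_ms is None or rtt_ms < self.min_rtt_ms:
            self.min_rtt_ms = rtt_ms

        extra_delay = rtt_ms - self.min_rtt_ms

        # Claimed reduction rule: if RTT > minRTT + tolerance + extra threshold,
        # immediately shrink AI to AI / (1 + latestRTT - minRTT) (times taken in
        # seconds here as an assumption).
        if extra_delay > self.tolerance_ms + self.extra_threshold_ms:
            self.ai_bytes = self.ai_bytes / (1.0 + extra_delay / 1000.0)
            return  # no growth on this ACK

        # Claimed gating rule: no 'linear increment' whatsoever while
        # curRTT > minRTT + tolerance (eg 25 ms).
        if extra_delay <= self.tolerance_ms:
            self.ai_bytes += segment_bytes  # linear (per-ACK) growth only


# Example: one late ACK (120 ms against a 20 ms floor) triggers the reduction.
ai = AllowedInFlights(ai_bytes=64_000)
ai.on_ack(rtt_ms=20.0, segment_bytes=1460)
ai.on_ack(rtt_ms=120.0, segment_bytes=1460)
print(round(ai.ai_bytes))   # reduced below the original 64 kB budget
```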

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (other network) , Internet resources , or resources included in virtual private networks (VPNs) .
WO2008093066A2
CLAIM 1
. Methods for improving TCP &/or TCP like protocols &/or other protocols , which could be capable of Increment Deployable TCP Friendly completely implemented directly via TCP/ Protocol stack software modifications without requiring any other changes/ re-configurations of any other network (hybrid cloud) components whatsoever and which could enable immediate ready guaranteed service PSTN transmissions quality capable networks and without a single packet ever gets congestion dropped , said methods avoid &/or prevent &/or recover from network congestions via complete or partial 'pause' / 'halt' in sender's data transmissions , OR algorithmic derived dynamic reduction of CWND or Allowed inFlights values to clear all traversed nodes' buffered packets (or to clear certain levels of traversed nodes' buffered packets) , when congestion events are detected such as congestion packet drops &/or returning ACK's round trip time RTT / one way trip time OTT comes close to or exceeded certain threshold value eg known value of the flow path's uncongested RTT / OTT or their latest available best estimate min(RTT) / min(OTT) .
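As context for the mapping above, a minimal Python sketch of the congestion reaction described in WO2008093066A2 Claim 1: on a congestion drop, or when a returning ACK's RTT comes close to or exceeds a threshold derived from the path's uncongested RTT estimate min(RTT), the sender either pauses/halts transmission or reduces its CWND/allowed-inFlights value so that buffered packets along the path can drain. The function and variable names, the 25 ms default, and the proportional-reduction formula are illustrative assumptions, not the claim's own terms.

```python
# Illustrative sketch; names and the 25 ms default are assumptions.

def react_to_ack(cwnd_bytes, rtt_ms, min_rtt_ms, packet_dropped,
                 threshold_ms=25.0, pause=False):
    """Return (new_cwnd_bytes, paused) per the congestion reaction in claim 1.

    A congestion event is a packet drop, or an RTT that comes close to or
    exceeds min(RTT) + threshold. The reaction is either a complete/partial
    'pause'/'halt' of sending, or a reduction of CWND / allowed inFlights
    intended to clear buffered packets at the traversed nodes.
    """
    congested = packet_dropped or (rtt_ms >= min_rtt_ms + threshold_ms)
    if not congested:
        return cwnd_bytes, False
    if pause:
        return cwnd_bytes, True            # halt sending until buffers drain
    # Reduce CWND in proportion to the estimated queueing delay so the
    # buffered backlog along the path can clear (one possible reading).
    queued_fraction = (rtt_ms - min_rtt_ms) / max(rtt_ms, 1e-6)
    return int(cwnd_bytes * (1.0 - queued_fraction)), False


print(react_to_ack(64_000, rtt_ms=80.0, min_rtt_ms=20.0, packet_dropped=False))
# -> (16000, False): three quarters of the RTT was queueing delay, so it is shed.
```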

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (bandwidth requirements) , or I/O access rates (end user) (destination nodes) for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
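Before turning to the mapped WO2008093066A2 claims below, a compact Python sketch of the flow recited in US9635134B2 Claim 7 (and, in system form, Claim 13) to make the charted elements easier to track: monitor a consumption rate, prioritize under a first scheme, test whether the change in the rate exceeds a predetermined threshold, re-prioritize under a second scheme against the environment's maximum allowed capacity, and migrate the remaining VMs' consumption to alternate cloud resources. The class and method names, and the simple additive consumption metric, are illustrative assumptions, not language from the patent.

```python
# Illustrative sketch of the claimed resource-manager flow; all names are assumptions.
from dataclasses import dataclass, field

@dataclass
class VmSample:
    name: str
    cpu: float                   # processor usage
    memory: float                # memory usage
    io_rate: float               # I/O access rate
    change_region: float = 0.0   # changed regions of the VM's graphical display

    @property
    def consumption(self):
        return self.cpu + self.memory + self.io_rate + self.change_region

@dataclass
class CloudResourceManager:
    max_capacity: float
    change_threshold: float
    previous: dict = field(default_factory=dict)

    def step(self, samples):
        """One monitoring pass; returns the VMs selected for migration."""
        # First scheme: order VMs by their determined consumption rate.
        by_first_scheme = sorted(samples, key=lambda s: s.consumption, reverse=True)

        # Does the change in consumption rate exceed the predetermined threshold?
        exceeded = any(
            abs(s.consumption - self.previous.get(s.name, s.consumption))
            > self.change_threshold
            for s in samples
        )
        self.previous = {s.name: s.consumption for s in samples}
        if not exceeded:
            return []

        # Second scheme: prioritize against the maximum allowed capacity, then
        # migrate the overflow to alternate cloud resources outside the environment.
        total, keep = 0.0, set()
        for s in by_first_scheme:
            if total + s.consumption <= self.max_capacity:
                total += s.consumption
                keep.add(s.name)
        return [s.name for s in samples if s.name not in keep]


mgr = CloudResourceManager(max_capacity=2.0, change_threshold=0.3)
mgr.step([VmSample("vm1", 0.5, 0.3, 0.1), VmSample("vm2", 0.4, 0.2, 0.1)])
print(mgr.step([VmSample("vm1", 1.2, 0.6, 0.2), VmSample("vm2", 0.4, 0.2, 0.1)]))
# -> ['vm2']: vm1 now fills the allowed capacity, so vm2's consumption is migrated out.
```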
WO2008093066A2
CLAIM 3
. Methods for virtually congestion free guaranteed service capable data communications network/ Internet/ Internet subsets/ Proprietary Internet segment/WAN/LAN [ hereinafter refers to as network] with any combinations/ subsets of features (a) to(f) : (a) where all packets/data units sent from a source within the network arriving at a destination within the network all arrive without a single packet being dropped due to network congestions . (b) applies only to all packets/ data units requiring guaranteed service capability . (c) where the packet/ data unit traffics are intercepted and processed before being forwarded onwards . (d) where the sending source/ sources traffics are intercepted processed and forwarded onwards , and/or the packet/ data unit traffics are only intercepted processed and forwarded onwards at the originating sending source/ sources . (e) where the existing TCP/IP stack at sending source and/or receiving destination is/are modified to achieve the same end-to-end performance results between any source-destination nodes (I/O access rates) pair within the network , without requiring use of existing QoS/MPLS techniques nor requiring any of the switches/routers softwares within the network to be modified or contribute to achieving the end- to-end performance results nor requiring provision of unlimited bandwidths at each and every inter-node links within the network . (f) in which traffics in said network comprises mostly of TCP traffics , and other traffics types such as UDP/ICMP . . . etc do not exceed , or the applications generating other traffics types are arranged not to exceed , the whole available bandwidth of any of the inter- node link/s within the network at any time , where if other traffics types such as UDP/ICMP . . do exceed the whole available bandwidth of any of the inter- node link/s within the network at any time only the source-destination nodes pair traffics traversing the thus affected inter- node link/s within the network would not necessarily be virtually congestion free guaranteed service capable during this time and/or all packets/data units sent from a source within the network arriving at a destination within the network would not necessarily all arrive ie packet/s do gets dropped due to network congestions .

WO2008093066A2
CLAIM 24
. Method to overcome combined effects of remote receiver TCP's buffer size limitation & high transit link's packet drop rates on throughputs achievable (such as BULK FTPs , High Energy Grids Transfer) , throughputs achievable here may be reduced many times magnitudes order smaller than actual available bottleneck bandwidth : (A) TCP SACK mechanism should be modified to have unlimited SACK BLOCKS in SACK field , so within each RTT/ each fast retransmit phase ALL missing SACK Gaps SeqNo/ SeqNo blocks could be fast retransmit requested . OR could be modified so that ALL missing SACK Gaps SeqNo/ SeqNo blocks could be contained within pre-agreed formatted packet/s' data payload transmitted to sender TCP for fast retransmissions . OR existing max 3 blocks SACK mechanism could be modified so that ALL missing SACK Gaps SeqNos/ SeqNo blocks could cyclical sequentially be indicated within a number of consecutive DUPACKs (each containing progressively larger value yet unindicated missing SACK Gaps SeqNos/ SeqNo blocks) ie a necessary number of DUPACKs would be forwarded sufficiently to request all the missing SACK SeqNos/ SeqNo blocks , each DUPACK packets repeatedly uses the existing 3 SACK block fields to request as yet unrequested progressively larger SACK Gaps SeqNos/ SeqNo blocks for retransmission WITHIN same fast retransmit phase/ same RTT period . AND/ OR (B) Optional but preferable TCP be also modified to have very large (or unlimited linked list structure , size of which may be incremented dynamically allocated as & when needed) receiver buffer . OR all receiver TCP buffered packets / all receiver TCP buffered 'disjoint chunks' should all be moved from receiver buffer into dynamic arbitrary large size allocated as needed 'temporary space' , while in this 'temporary space' awaits missing gap packets to be fast retransmit received filling the holes before forwarding onwards non-gap continuous SeqNo packets onwards to end user (access rates) application/s . OR (C) Instead of above direct TCP source code modifications , an independent 'intermediate buffer' intercept software can be implemented sitting between the incoming network & receiver TCP to give effects to above foregoing (A) & (B) , working in cooperation with earlier sender based TCP Accelerator software : implement an unlimited linked list holding all arriving packets in well ordered SeqNo , this sits at remote PC situated between the sender TCPAccel & remote receiver TCP , does all 3rd DUP ACKs processing towards sender TCP (which could even just be notifying sender TCPAccel of all gaps/ gap blocks , or unlimited normal SACK blocks) THEN forward continuous SeqNo packets to remote receiver MSTCP when packets non-disjointed) THUS remote MSTCP now appears to have unlimited TCP buffer & mass drops problem now completely disappear .
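A small Python sketch of the "cyclical SACK re-use" variant in option (A) of WO2008093066A2 Claim 24 above: because a standard SACK option carries at most three blocks, all missing sequence-number gaps are reported by emitting as many consecutive DUPACKs as needed, each reusing the three block fields for the next, progressively larger, not-yet-requested gaps, all within the same fast-retransmit phase. The helper name and the (start, end) tuple format are assumptions for illustration.

```python
# Illustrative sketch; the function name and tuple format are assumptions.

def cyclical_sack_dupacks(missing_gaps, blocks_per_sack=3):
    """Split all missing (start, end) SeqNo gaps across consecutive DUPACKs.

    Each DUPACK reuses the existing 3 SACK block fields, so every gap is
    requested within the same fast-retransmit phase / RTT, in increasing
    sequence-number order.
    """
    gaps = sorted(missing_gaps)                 # progressively larger SeqNos
    return [gaps[i:i + blocks_per_sack]
            for i in range(0, len(gaps), blocks_per_sack)]


# Seven holes in the receive buffer -> three DUPACKs carry them all.
gaps = [(1000, 1460), (4000, 4460), (7000, 7460), (9000, 9460),
        (12000, 12460), (15000, 15460), (18000, 18460)]
for i, dupack in enumerate(cyclical_sack_dupacks(gaps), start=1):
    print(f"DUPACK {i}: SACK blocks {dupack}")
```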

WO2008093066A2
CLAIM 26
. Method to adapt various earlier described external public Internet increment deployable TCP/ UDP /DCCP/ RTSP modifications (AI : allowed inFlights scheme , with or without 'intermediate buffer' / Cyclical SACK Re-use schemes) to be installed in all network nodes/ TCP/ UDP /DCCP/ RTSP sources within proprietary LAN/ WAN/ external Internet segments , providing instant guaranteed PSTN transmission qualities among all nodes or all '1st priority' traffic sources requiring guaranteed real time critical deliveries ; requires additional refinements here (also assuming all , or a majority of , sending traffic sources' protocols are so modified) : at all times (during fast retransmit phase , or normal phase) , if incoming ACK's/ DUPACK's RTT (or OTT) > min RTT (or minOTT) + specified tolerance variance eg 25ms + optionally specified additional threshold eg 50ms , THEN immediately reduce AI size to AI/ (1 + latest RTT or latest OTT where appropriate - minRTT or minOTT where appropriate) ; THUS total AI allowed inFlights bytes from all modified traffic sources (may further assume limits total maximum aggregate peak '1st priority' eg VoIP bandwidth requirements (memory usage, memory consumption rate) at any time is always much less than available network bandwidth , also 1st priority traffic sources could be assigned a much larger specified tolerance value eg 100ms & a much larger additional threshold value eg 150ms) most of the times would never cause additional packet delivery latency more than eg 25ms + optional 50ms here BEYOND the absolute minimum uncongested RTT/ uncongested OTT : after reduction CAI will stop forwarding UNTIL a sufficient number of returning ACKs sufficiently shift the sliding window's left edge ; we do not want to overly and continuously reduce CAI , so this should happen only if total extra buffer delays > eg 25ms + 50ms . Also the CAI algorithm should be further modified to no longer allow 'linear increment' (eg previously , when ACKs return late , only 'linear increment' was allowed , not 'exponential increment') WHATSOEVER AT ANY TIME if curRTT > minRTT + eg 25ms , thus enabling proprietary LAN/WAN network flows to STABILISE and utilise near 100% of bandwidth BUT not cause buffer delays to grow beyond eg 25ms (allowing linear increments whenever an ACK returns , even if very late , would invariably cause network buffer delays to approach maximum , destroying realtime critical deliveries for 1st priority traffics) .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (bandwidth requirements) tracking .
WO2008093066A2
CLAIM 26
. Method to adapt various earlier described external public Internet increment deployable TCP/ UDP /DCCP/ RTSP modifications (AI : allowed inFlights scheme , with or without 'intermediate buffer' / Cyclical SACK Re-use schemes) to be installed in all network nodes/ TCP/ UDP /DCCP/ RTSP sources within proprietary LAN/ WAN/ external Internet segments , providing instant guaranteed PSTN transmission qualities among all nodes or all '1st priority' traffic sources requiring guaranteed real time critical deliveries ; requires additional refinements here (also assuming all , or a majority of , sending traffic sources' protocols are so modified) : at all times (during fast retransmit phase , or normal phase) , if incoming ACK's/ DUPACK's RTT (or OTT) > min RTT (or minOTT) + specified tolerance variance eg 25ms + optionally specified additional threshold eg 50ms , THEN immediately reduce AI size to AI/ (1 + latest RTT or latest OTT where appropriate - minRTT or minOTT where appropriate) ; THUS total AI allowed inFlights bytes from all modified traffic sources (may further assume limits total maximum aggregate peak '1st priority' eg VoIP bandwidth requirements (memory usage, memory consumption rate) at any time is always much less than available network bandwidth , also 1st priority traffic sources could be assigned a much larger specified tolerance value eg 100ms & a much larger additional threshold value eg 150ms) most of the times would never cause additional packet delivery latency more than eg 25ms + optional 50ms here BEYOND the absolute minimum uncongested RTT/ uncongested OTT : after reduction CAI will stop forwarding UNTIL a sufficient number of returning ACKs sufficiently shift the sliding window's left edge ; we do not want to overly and continuously reduce CAI , so this should happen only if total extra buffer delays > eg 25ms + 50ms . Also the CAI algorithm should be further modified to no longer allow 'linear increment' (eg previously , when ACKs return late , only 'linear increment' was allowed , not 'exponential increment') WHATSOEVER AT ANY TIME if curRTT > minRTT + eg 25ms , thus enabling proprietary LAN/WAN network flows to STABILISE and utilise near 100% of bandwidth BUT not cause buffer delays to grow beyond eg 25ms (allowing linear increments whenever an ACK returns , even if very late , would invariably cause network buffer delays to approach maximum , destroying realtime critical deliveries for 1st priority traffics) .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (other network) , Internet resources , or resources included in virtual private networks (VPNs) .
WO2008093066A2
CLAIM 1
. Methods for improving TCP &/or TCP like protocols &/or other protocols , which could be capable of Increment Deployable TCP Friendly completely implemented directly via TCP/ Protocol stack software modifications without requiring any other changes/ re-configurations of any other network (hybrid cloud) components whatsoever and which could enable immediate ready guaranteed service PSTN transmissions quality capable networks and without a single packet ever gets congestion dropped , said methods avoid &/or prevent &/or recover from network congestions via complete or partial 'pause' / 'halt' in sender's data transmissions , OR algorithmic derived dynamic reduction of CWND or Allowed inFlights values to clear all traversed nodes' buffered packets (or to clear certain levels of traversed nodes' buffered packets) , when congestion events are detected such as congestion packet drops &/or returning ACK's round trip time RTT / one way trip time OTT comes close to or exceeded certain threshold value eg known value of the flow path's uncongested RTT / OTT or their latest available best estimate min(RTT) / min(OTT) .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate (bandwidth requirements) , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (bandwidth requirements) , I/O access rates (end user) (destination nodes) , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
WO2008093066A2
CLAIM 3
. Methods for virtually congestion free guaranteed service capable data communications network/ Internet/ Internet subsets/ Proprietary Internet segment/WAN/LAN [ hereinafter refers to as network] with any combinations/ subsets of features (a) to(f) : (a) where all packets/data units sent from a source within the network arriving at a destination within the network all arrive without a single packet being dropped due to network congestions . (b) applies only to all packets/ data units requiring guaranteed service capability . (c) where the packet/ data unit traffics are intercepted and processed before being forwarded onwards . (d) where the sending source/ sources traffics are intercepted processed and forwarded onwards , and/or the packet/ data unit traffics are only intercepted processed and forwarded onwards at the originating sending source/ sources . (e) where the existing TCP/IP stack at sending source and/or receiving destination is/are modified to achieve the same end-to-end performance results between any source-destination nodes (I/O access rates) pair within the network , without requiring use of existing QoS/MPLS techniques nor requiring any of the switches/routers softwares within the network to be modified or contribute to achieving the end- to-end performance results nor requiring provision of unlimited bandwidths at each and every inter-node links within the network . (f) in which traffics in said network comprises mostly of TCP traffics , and other traffics types such as UDP/ICMP . . . etc do not exceed , or the applications generating other traffics types are arranged not to exceed , the whole available bandwidth of any of the inter- node link/s within the network at any time , where if other traffics types such as UDP/ICMP . . do exceed the whole available bandwidth of any of the inter- node link/s within the network at any time only the source-destination nodes pair traffics traversing the thus affected inter- node link/s within the network would not necessarily be virtually congestion free guaranteed service capable during this time and/or all packets/data units sent from a source within the network arriving at a destination within the network would not necessarily all arrive ie packet/s do gets dropped due to network congestions .

WO2008093066A2
CLAIM 24
. Method to overcome combined effects of remote receiver TCP's buffer size limitation & high transit link's packet drop rates on throughputs achievable (such as BULK FTPs , High Energy Grids Transfer) , throughputs achievable here may be reduced many times magnitudes order smaller than actual available bottleneck bandwidth : (A) TCP SACK mechanism should be modified to have unlimited SACK BLOCKS in SACK field , so within each RTT/ each fast retransmit phase ALL missing SACK Gaps SeqNo/ SeqNo blocks could be fast retransmit requested . OR could be modified so that ALL missing SACK Gaps SeqNo/ SeqNo blocks could be contained within pre-agreed formatted packet/s' data payload transmitted to sender TCP for fast retransmissions . OR existing max 3 blocks SACK mechanism could be modified so that ALL missing SACK Gaps SeqNos/ SeqNo blocks could cyclical sequentially be indicated within a number of consecutive DUPACKs (each containing progressively larger value yet unindicated missing SACK Gaps SeqNos/ SeqNo blocks) ie a necessary number of DUPACKs would be forwarded sufficiently to request all the missing SACK SeqNos/ SeqNo blocks , each DUPACK packets repeatedly uses the existing 3 SACK block fields to request as yet unrequested progressively larger SACK Gaps SeqNos/ SeqNo blocks for retransmission WITHIN same fast retransmit phase/ same RTT period . AND/ OR (B) Optional but preferable TCP be also modified to have very large (or unlimited linked list structure , size of which may be incremented dynamically allocated as & when needed) receiver buffer . OR all receiver TCP buffered packets / all receiver TCP buffered 'disjoint chunks' should all be moved from receiver buffer into dynamic arbitrary large size allocated as needed 'temporary space' , while in this 'temporary space' awaits missing gap packets to be fast retransmit received filling the holes before forwarding onwards non-gap continuous SeqNo packets onwards to end user (access rates) application/s . OR (C) Instead of above direct TCP source code modifications , an independent 'intermediate buffer' intercept software can be implemented sitting between the incoming network & receiver TCP to give effects to above foregoing (A) & (B) , working in cooperation with earlier sender based TCP Accelerator software : implement an unlimited linked list holding all arriving packets in well ordered SeqNo , this sits at remote PC situated between the sender TCPAccel & remote receiver TCP , does all 3rd DUP ACKs processing towards sender TCP (which could even just be notifying sender TCPAccel of all gaps/ gap blocks , or unlimited normal SACK blocks) THEN forward continuous SeqNo packets to remote receiver MSTCP when packets non-disjointed) THUS remote MSTCP now appears to have unlimited TCP buffer & mass drops problem now completely disappear .

WO2008093066A2
CLAIM 26
. Method to adapt various earlier described external public Internet increment deployable TCP/ UDP /DCCP/ RTSP modifications (AI : allowed inFlights scheme , with or without 'intermediate buffer' / Cyclical SACK Re-use schemes) to be installed in all network nodes/ TCP/ UDP /DCCP/ RTSP sources within proprietary LAN/ WAN/ external Internet segments , providing instant guaranteed PSTN transmission qualities among all nodes or all '1st priority' traffic sources requiring guaranteed real time critical deliveries ; requires additional refinements here (also assuming all , or a majority of , sending traffic sources' protocols are so modified) : at all times (during fast retransmit phase , or normal phase) , if incoming ACK's/ DUPACK's RTT (or OTT) > min RTT (or minOTT) + specified tolerance variance eg 25ms + optionally specified additional threshold eg 50ms , THEN immediately reduce AI size to AI/ (1 + latest RTT or latest OTT where appropriate - minRTT or minOTT where appropriate) ; THUS total AI allowed inFlights bytes from all modified traffic sources (may further assume limits total maximum aggregate peak '1st priority' eg VoIP bandwidth requirements (memory usage, memory consumption rate) at any time is always much less than available network bandwidth , also 1st priority traffic sources could be assigned a much larger specified tolerance value eg 100ms & a much larger additional threshold value eg 150ms) most of the times would never cause additional packet delivery latency more than eg 25ms + optional 50ms here BEYOND the absolute minimum uncongested RTT/ uncongested OTT : after reduction CAI will stop forwarding UNTIL a sufficient number of returning ACKs sufficiently shift the sliding window's left edge ; we do not want to overly and continuously reduce CAI , so this should happen only if total extra buffer delays > eg 25ms + 50ms . Also the CAI algorithm should be further modified to no longer allow 'linear increment' (eg previously , when ACKs return late , only 'linear increment' was allowed , not 'exponential increment') WHATSOEVER AT ANY TIME if curRTT > minRTT + eg 25ms , thus enabling proprietary LAN/WAN network flows to STABILISE and utilise near 100% of bandwidth BUT not cause buffer delays to grow beyond eg 25ms (allowing linear increments whenever an ACK returns , even if very late , would invariably cause network buffer delays to approach maximum , destroying realtime critical deliveries for 1st priority traffics) .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (bandwidth requirements) tracking .
WO2008093066A2
CLAIM 26
. Method to adapt various earlier described external public Internet increment deployable TCP/ UDP /DCCP/ RTSP modifications (AI : allowed inFlights scheme , with or without 'intermediate buffer' / Cyclical SACK Re-use schemes) to be installed in all network nodes/ TCP/ UDP /DCCP/ RTSP sources within proprietary LAN/ WAN/ external Internet segments , providing instant guaranteed PSTN transmission qualities among all nodes or all '1st priority' traffic sources requiring guaranteed real time critical deliveries ; requires additional refinements here (also assuming all , or a majority of , sending traffic sources' protocols are so modified) : at all times (during fast retransmit phase , or normal phase) , if incoming ACK's/ DUPACK's RTT (or OTT) > min RTT (or minOTT) + specified tolerance variance eg 25ms + optionally specified additional threshold eg 50ms , THEN immediately reduce AI size to AI/ (1 + latest RTT or latest OTT where appropriate - minRTT or minOTT where appropriate) ; THUS total AI allowed inFlights bytes from all modified traffic sources (may further assume limits total maximum aggregate peak '1st priority' eg VoIP bandwidth requirements (memory usage, memory consumption rate) at any time is always much less than available network bandwidth , also 1st priority traffic sources could be assigned a much larger specified tolerance value eg 100ms & a much larger additional threshold value eg 150ms) most of the times would never cause additional packet delivery latency more than eg 25ms + optional 50ms here BEYOND the absolute minimum uncongested RTT/ uncongested OTT : after reduction CAI will stop forwarding UNTIL a sufficient number of returning ACKs sufficiently shift the sliding window's left edge ; we do not want to overly and continuously reduce CAI , so this should happen only if total extra buffer delays > eg 25ms + 50ms . Also the CAI algorithm should be further modified to no longer allow 'linear increment' (eg previously , when ACKs return late , only 'linear increment' was allowed , not 'exponential increment') WHATSOEVER AT ANY TIME if curRTT > minRTT + eg 25ms , thus enabling proprietary LAN/WAN network flows to STABILISE and utilise near 100% of bandwidth BUT not cause buffer delays to grow beyond eg 25ms (allowing linear increments whenever an ACK returns , even if very late , would invariably cause network buffer delays to approach maximum , destroying realtime critical deliveries for 1st priority traffics) .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (other network) , Internet resources , or resources included in virtual private networks (VPNs) .
WO2008093066A2
CLAIM 1
. Methods for improving TCP &/or TCP like protocols &/or other protocols , which could be capable of Increment Deployable TCP Friendly completely implemented directly via TCP/ Protocol stack software modifications without requiring any other changes/ re-configurations of any other network (hybrid cloud) components whatsoever and which could enable immediate ready guaranteed service PSTN transmissions quality capable networks and without a single packet ever gets congestion dropped , said methods avoid &/or prevent &/or recover from network congestions via complete or partial 'pause' / 'halt' in sender's data transmissions , OR algorithmic derived dynamic reduction of CWND or Allowed inFlights values to clear all traversed nodes' buffered packets (or to clear certain levels of traversed nodes' buffered packets) , when congestion events are detected such as congestion packet drops &/or returning ACK's round trip time RTT / one way trip time OTT comes close to or exceeded certain threshold value eg known value of the flow path's uncongested RTT / OTT or their latest available best estimate min(RTT) / min(OTT) .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20090019449A1

Filed: 2007-10-26     Issued: 2009-01-15

Load balancing method and apparatus in symmetric multi-processor system

(Original Assignee) Samsung Electronics Co Ltd     (Current Assignee) Samsung Electronics Co Ltd

Gyu-sang Choi, Chae-seok Im, Si-hwa Lee
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (multi-processor system) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource (selected two) management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20090019449A1
CLAIM 9
. A symmetric multi-processor system (memory usage) , comprising : a plurality of processors ;
a scheduler , which selects at least two processors based on loads of the plurality of processors ;
a run queue of a first processor from among the selected two (first resource, first resource management scheme) processors , which stores tasks to be performed by the first processor ;
a run queue of a second processor , from among the selected two processors , which stores tasks to be performed by the second processor ;
and a migration queue of the second processor , which stores tasks migrated from the run queue of a processor other than the second processor , wherein the scheduler migrates a predetermined task stored in the run queue of the first processor to the migration queue of the second processor , and migrates the task stored in the migration queue of the second processor to the run queue of the second processor .
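A short Python sketch of the task-migration path recited in US20090019449A1 Claim 9 above, which the chart maps to the migration limitations of US9635134B2 Claim 1: a scheduler selects two processors by load, moves a task from the first processor's run queue into the second processor's migration queue, and the migrated task is later moved from the migration queue into the second processor's run queue. Class and attribute names are illustrative assumptions.

```python
# Illustrative sketch; names are assumptions, not claim terms.
from collections import deque

class Processor:
    def __init__(self, name):
        self.name = name
        self.run_queue = deque()        # tasks this processor will execute
        self.migration_queue = deque()  # tasks handed over from another processor

    @property
    def load(self):
        return len(self.run_queue) + len(self.migration_queue)

def balance(processors):
    """Scheduler step per claim 9: pick the most and least loaded processors,
    migrate one task via the target's migration queue, then drain it."""
    first = max(processors, key=lambda p: p.load)
    second = min(processors, key=lambda p: p.load)
    if first is second or not first.run_queue:
        return
    second.migration_queue.append(first.run_queue.popleft())
    # Later, the scheduler moves migrated tasks into the second run queue.
    while second.migration_queue:
        second.run_queue.append(second.migration_queue.popleft())


cpu0, cpu1 = Processor("cpu0"), Processor("cpu1")
cpu0.run_queue.extend(["task-a", "task-b", "task-c"])
balance([cpu0, cpu1])
print(list(cpu0.run_queue), list(cpu1.run_queue))   # ['task-b', 'task-c'] ['task-a']
```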

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the first resource (selected two) management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20090019449A1
CLAIM 9
. A symmetric multi-processor system , comprising : a plurality of processors ;
a scheduler , which selects at least two processors based on loads of the plurality of processors ;
a run queue of a first processor from among the selected two (first resource, first resource management scheme) processors , which stores tasks to be performed by the first processor ;
a run queue of a second processor , from among the selected two processors , which stores tasks to be performed by the second processor ;
and a migration queue of the second processor , which stores tasks migrated from the run queue of a processor other than the second processor , wherein the scheduler migrates a predetermined task stored in the run queue of the first processor to the migration queue of the second processor , and migrates the task stored in the migration queue of the second processor to the run queue of the second processor .
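US9635134B2 Claim 2, quoted just above, prioritizes the VMs with a low inter-reference recency set (LIRS) replacement scheme, and Claims 4, 10 and 16 apply it to memory-usage tracking. To make that element easier to follow, the sketch below is a greatly simplified, assumption-laden illustration of the inter-reference recency (IRR) ordering on which LIRS is based; it omits the LIR/HIR stack maintenance of the full published LIRS algorithm, and all class and method names are assumptions.

```python
# Greatly simplified illustration of inter-reference recency (IRR) ranking;
# not the full LIRS algorithm, and all names are assumptions.

class IrrTracker:
    def __init__(self):
        self.clock = 0
        self.last_access = {}   # vm/page id -> time of most recent access
        self.irr = {}           # vm/page id -> gap between its last two accesses

    def access(self, key):
        self.clock += 1
        if key in self.last_access:
            self.irr[key] = self.clock - self.last_access[key]
        self.last_access[key] = self.clock

    def priority_order(self):
        """Lowest IRR first: items re-referenced after short gaps ('hot',
        LIR-like) are kept; items with large or unknown IRR are eviction or
        migration candidates."""
        inf = float("inf")
        return sorted(self.last_access, key=lambda k: self.irr.get(k, inf))


t = IrrTracker()
for key in ["vm1", "vm2", "vm1", "vm3", "vm1", "vm2"]:
    t.access(key)
print(t.priority_order())   # ['vm1', 'vm2', 'vm3']: vm1 has the shortest re-reference gap
```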

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (multi-processor system) tracking .
US20090019449A1
CLAIM 9
. A symmetric multi-processor system (memory usage) , comprising : a plurality of processors ;
a scheduler , which selects at least two processors based on loads of the plurality of processors ;
a run queue of a first processor from among the selected two processors , which stores tasks to be performed by the first processor ;
a run queue of a second processor , from among the selected two processors , which stores tasks to be performed by the second processor ;
and a migration queue of the second processor , which stores tasks migrated from the run queue of a processor other than the second processor , wherein the scheduler migrates a predetermined task stored in the run queue of the first processor to the migration queue of the second processor , and migrates the task stored in the migration queue of the second processor to the run queue of the second processor .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (multi-processor system) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource (selected two) management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20090019449A1
CLAIM 9
. A symmetric multi-processor system (memory usage) , comprising : a plurality of processors ;
a scheduler , which selects at least two processors based on loads of the plurality of processors ;
a run queue of a first processor from among the selected two (first resource, first resource management scheme) processors , which stores tasks to be performed by the first processor ;
a run queue of a second processor , from among the selected two processors , which stores tasks to be performed by the second processor ;
and a migration queue of the second processor , which stores tasks migrated from the run queue of a processor other than the second processor , wherein the scheduler migrates a predetermined task stored in the run queue of the first processor to the migration queue of the second processor , and migrates the task stored in the migration queue of the second processor to the run queue of the second processor .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (multi-processor system) tracking .
US20090019449A1
CLAIM 9
. A symmetric multi-processor system (memory usage) , comprising : a plurality of processors ;
a scheduler , which selects at least two processors based on loads of the plurality of processors ;
a run queue of a first processor from among the selected two processors , which stores tasks to be performed by the first processor ;
a run queue of a second processor , from among the selected two processors , which stores tasks to be performed by the second processor ;
and a migration queue of the second processor , which stores tasks migrated from the run queue of a processor other than the second processor , wherein the scheduler migrates a predetermined task stored in the run queue of the first processor to the migration queue of the second processor , and migrates the task stored in the migration queue of the second processor to the run queue of the second processor .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource (selected two) management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (multi-processor system) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20090019449A1
CLAIM 9
. A symmetric multi-processor system (memory usage) , comprising : a plurality of processors ;
a scheduler , which selects at least two processors based on loads of the plurality of processors ;
a run queue of a first processor from among the selected two (first resource, first resource management scheme) processors , which stores tasks to be performed by the first processor ;
a run queue of a second processor , from among the selected two processors , which stores tasks to be performed by the second processor ;
and a migration queue of the second processor , which stores tasks migrated from the run queue of a processor other than the second processor , wherein the scheduler migrates a predetermined task stored in the run queue of the first processor to the migration queue of the second processor , and migrates the task stored in the migration queue of the second processor to the run queue of the second processor .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (multi-processor system) tracking .
US20090019449A1
CLAIM 9
. A symmetric multi-processor system (memory usage) , comprising : a plurality of processors ;
a scheduler , which selects at least two processors based on loads of the plurality of processors ;
a run queue of a first processor from among the selected two processors , which stores tasks to be performed by the first processor ;
a run queue of a second processor , from among the selected two processors , which stores tasks to be performed by the second processor ;
and a migration queue of the second processor , which stores tasks migrated from the run queue of a processor other than the second processor , wherein the scheduler migrates a predetermined task stored in the run queue of the first processor to the migration queue of the second processor , and migrates the task stored in the migration queue of the second processor to the run queue of the second processor .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20080022073A1

Filed: 2007-07-31     Issued: 2008-01-24

Functional-level instruction-set computer architecture for processing application-layer content-service requests such as file-access requests

(Original Assignee) Alacritech Inc     (Current Assignee) RPX Corp

Millind Mittal, Mehul Kharidia, Tarun Tripathy, J. Mertoguno
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (central processing unit) , memory usage (storage area) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20080022073A1
CLAIM 9
. The file server of claim 8 , wherein the input buffer includes plural storage area (memory usage) s , and a storage area is assigned to the context .

US20080022073A1
CLAIM 10
. A file server comprising : a central processing unit (processor usage, I/O access rate) (CPU) running an instruction set including a plurality of instructions to operate a file system that organizes a plurality of files and meta-data pertaining to the files ;
and an offload device coupled to receive file-access requests from a network , the offload device comprising an output buffer coupled to the CPU ;
a memory , the memory allocated by the offload device , the memory storing lookup tables having entries , one or more of the entries comprising a tag storing a first variable-length string ;
processing units for executing functional-level instructions belonging to a functional-level instruction set , the functional-level instructions including instructions operating on operands of variable-length , the processing units comprising a lookup unit ;
and one or more processing slices operating in parallel , each processing slice processing a request , the request seeking to access a file , the file being one of the plurality of files , the request including a second variable-length string , the request having a context allocated by the offload device , the context including : an execution buffer that stores a variable-length operand , the variable-length operand representing the second variable-length string ;
a first fixed-length register storing a pointer that indicates a location for the variable-length operand ;
a second fixed-length register storing an operand-length that indicates a length for the variable-length operand or storing a pointer that indicates the end of the variable-length operand ;
wherein a functional-level instruction identifies the variable-length operand stored in the execution buffer by specifying a first register number for the first fixed-length register and a second register number for the second fixed-length register , the lookup unit searches the lookup tables for an entry , and the first variable-length string stored in the tag for the entry matches the variable-length operand .
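To make the mapped elements of US20080022073A1 Claim 10 concrete, a minimal Python sketch of its lookup step: a functional-level instruction names two fixed-length registers, one holding a pointer into the execution buffer and one holding the operand length, and the lookup unit compares the resulting variable-length operand against the variable-length string tags stored in a lookup table. The data structures, register layout, and names below are assumptions for illustration, not the patent's implementation.

```python
# Illustrative sketch; register layout and names are assumptions.

def lookup(execution_buffer, registers, reg_ptr, reg_len, lookup_table):
    """Resolve a variable-length operand via (pointer, length) registers and
    return the table entry whose tag matches it, or None."""
    start = registers[reg_ptr]
    length = registers[reg_len]
    operand = execution_buffer[start:start + length]    # variable-length string
    for entry in lookup_table:
        if entry["tag"] == operand:
            return entry
    return None


buffer = b"GET /exports/home/readme.txt"
regs = {0: 4, 1: 24}                       # r0 = pointer, r1 = operand length
table = [{"tag": b"/exports/home/readme.txt", "file_id": 17}]
print(lookup(buffer, regs, 0, 1, table))   # -> matching entry with file_id 17
```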

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (central processing unit) tracking .
US20080022073A1
CLAIM 10
. A file server comprising : a central processing unit (processor usage, I/O access rate) (CPU) running an instruction set including a plurality of instructions to operate a file system that organizes a plurality of files and meta-data pertaining to the files ;
and an offload device coupled to receive file-access requests from a network , the offload device comprising an output buffer coupled to the CPU ;
a memory , the memory allocated by the offload device , the memory storing lookup tables having entries , one or more of the entries comprising a tag storing a first variable-length string ;
processing units for executing functional-level instructions belonging to a functional-level instruction set , the functional-level instructions including instructions operating on operands of variable-length , the processing units comprising a lookup unit ;
and one or more processing slices operating in parallel , each processing slice processing a request , the request seeking to access a file , the file being one of the plurality of files , the request including a second variable-length string , the request having a context allocated by the offload device , the context including : an execution buffer that stores a variable-length operand , the variable-length operand representing the second variable-length string ;
a first fixed-length register storing a pointer that indicates a location for the variable-length operand ;
a second fixed-length register storing an operand-length that indicates a length for the variable-length operand or storing a pointer that indicates the end of the variable-length operand ;
wherein a functional-level instruction identifies the variable-length operand stored in the execution buffer by specifying a first register number for the first fixed-length register and a second register number for the second fixed-length register , the lookup unit searches the lookup tables for an entry , and the first variable-length string stored in the tag for the entry matches the variable-length operand .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (storage area) tracking .
US20080022073A1
CLAIM 9
. The file server of claim 8 , wherein the input buffer includes plural storage area (memory usage) s , and a storage area is assigned to the context .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (more process) , or resources included in virtual private networks (VPNs) .
US20080022073A1
CLAIM 10
. A file server comprising : a central processing unit (CPU) running an instruction set including a plurality of instructions to operate a file system that organizes a plurality of files and meta-data pertaining to the files ;
and an offload device coupled to receive file-access requests from a network , the offload device comprising an output buffer coupled to the CPU ;
a memory , the memory allocated by the offload device , the memory storing lookup tables having entries , one or more of the entries comprising a tag storing a first variable-length string ;
processing units for executing functional-level instructions belonging to a functional-level instruction set , the functional-level instructions including instructions operating on operands of variable-length , the processing units comprising a lookup unit ;
and one or more process (Internet resources) ing slices operating in parallel , each processing slice processing a request , the request seeking to access a file , the file being one of the plurality of files , the request including a second variable-length string , the request having a context allocated by the offload device , the context including : an execution buffer that stores a variable-length operand , the variable-length operand representing the second variable-length string ;
a first fixed-length register storing a pointer that indicates a location for the variable-length operand ;
a second fixed-length register storing an operand-length that indicates a length for the variable-length operand or storing a pointer that indicates the end of the variable-length operand ;
wherein a functional-level instruction identifies the variable-length operand stored in the execution buffer by specifying a first register number for the first fixed-length register and a second register number for the second fixed-length register , the lookup unit searches the lookup tables for an entry , and the first variable-length string stored in the tag for the entry matches the variable-length operand .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (central processing unit) , memory usage (storage area) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20080022073A1
CLAIM 9
. The file server of claim 8 , wherein the input buffer includes plural storage area (memory usage) s , and a storage area is assigned to the context .

US20080022073A1
CLAIM 10
. A file server comprising : a central processing unit (processor usage, I/O access rate) (CPU) running an instruction set including a plurality of instructions to operate a file system that organizes a plurality of files and meta-data pertaining to the files ;
and an offload device coupled to receive file-access requests from a network , the offload device comprising an output buffer coupled to the CPU ;
a memory , the memory allocated by the offload device , the memory storing lookup tables having entries , one or more of the entries comprising a tag storing a first variable-length string ;
processing units for executing functional-level instructions belonging to a functional-level instruction set , the functional-level instructions including instructions operating on operands of variable-length , the processing units comprising a lookup unit ;
and one or more processing slices operating in parallel , each processing slice processing a request , the request seeking to access a file , the file being one of the plurality of files , the request including a second variable-length string , the request having a context allocated by the offload device , the context including : an execution buffer that stores a variable-length operand , the variable-length operand representing the second variable-length string ;
a first fixed-length register storing a pointer that indicates a location for the variable-length operand ;
a second fixed-length register storing an operand-length that indicates a length for the variable-length operand or storing a pointer that indicates the end of the variable-length operand ;
wherein a functional-level instruction identifies the variable-length operand stored in the execution buffer by specifying a first register number for the first fixed-length register and a second register number for the second fixed-length register , the lookup unit searches the lookup tables for an entry , and the first variable-length string stored in the tag for the entry matches the variable-length operand .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (central processing unit) tracking .
US20080022073A1
CLAIM 10
. A file server comprising : a central processing unit (processor usage, I/O access rate) (CPU) running an instruction set including a plurality of instructions to operate a file system that organizes a plurality of files and meta-data pertaining to the files ;
and an offload device coupled to receive file-access requests from a network , the offload device comprising an output buffer coupled to the CPU ;
a memory , the memory allocated by the offload device , the memory storing lookup tables having entries , one or more of the entries comprising a tag storing a first variable-length string ;
processing units for executing functional-level instructions belonging to a functional-level instruction set , the functional-level instructions including instructions operating on operands of variable-length , the processing units comprising a lookup unit ;
and one or more processing slices operating in parallel , each processing slice processing a request , the request seeking to access a file , the file being one of the plurality of files , the request including a second variable-length string , the request having a context allocated by the offload device , the context including : an execution buffer that stores a variable-length operand , the variable-length operand representing the second variable-length string ;
a first fixed-length register storing a pointer that indicates a location for the variable-length operand ;
a second fixed-length register storing an operand-length that indicates a length for the variable-length operand or storing a pointer that indicates the end of the variable-length operand ;
wherein a functional-level instruction identifies the variable-length operand stored in the execution buffer by specifying a first register number for the first fixed-length register and a second register number for the second fixed-length register , the lookup unit searches the lookup tables for an entry , and the first variable-length string stored in the tag for the entry matches the variable-length operand .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (storage area) tracking .
US20080022073A1
CLAIM 9
. The file server of claim 8 , wherein the input buffer includes plural storage areas (memory usage) , and a storage area is assigned to the context .
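
Claims 8 through 10 of US9635134B2 recite prioritizing VMs with a low inter-reference recency set (LIRS) replacement scheme and LIRS-based processor and memory usage tracking. The sketch below captures only the inter-reference-recency bookkeeping behind LIRS, not the full LIR/HIR stack-pruning algorithm; the usage bar and all names are assumptions.

import math

class IrrTracker:
    """Minimal inter-reference-recency (IRR) tracker in the spirit of LIRS."""
    def __init__(self):
        self.clock = 0
        self.last_seen = {}   # vm_id -> last tick on which the VM exceeded the usage bar
        self.irr = {}         # vm_id -> gap between its last two high-usage ticks

    def record(self, vm_id, usage, bar=0.5):
        """Record one monitoring tick of processor or memory usage for vm_id."""
        self.clock += 1
        if usage >= bar:                        # treat a high-usage tick as a "reference"
            self.irr.setdefault(vm_id, math.inf)
            if vm_id in self.last_seen:
                self.irr[vm_id] = self.clock - self.last_seen[vm_id]
            self.last_seen[vm_id] = self.clock

    def priority(self):
        """VMs ordered from lowest IRR (hot, keep local) to highest (migration candidates)."""
        return sorted(self.irr, key=self.irr.get)

t = IrrTracker()
for vm, cpu in [("vm1", 0.9), ("vm2", 0.9), ("vm1", 0.9)]:
    t.record(vm, cpu)
assert t.priority() == ["vm1", "vm2"]   # vm1 was re-referenced, vm2 was not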

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (more process) , or resources included in virtual private networks (VPNs) .
US20080022073A1
CLAIM 10
. A file server comprising : a central processing unit (CPU) running an instruction set including a plurality of instructions to operate a file system that organizes a plurality of files and meta-data pertaining to the files ;
and an offload device coupled to receive file-access requests from a network , the offload device comprising an output buffer coupled to the CPU ;
a memory , the memory allocated by the offload device , the memory storing lookup tables having entries , one or more of the entries comprising a tag storing a first variable-length string ;
processing units for executing functional-level instructions belonging to a functional-level instruction set , the functional-level instructions including instructions operating on operands of variable-length , the processing units comprising a lookup unit ;
and one or more processing slices (Internet resources) operating in parallel , each processing slice processing a request , the request seeking to access a file , the file being one of the plurality of files , the request including a second variable-length string , the request having a context allocated by the offload device , the context including : an execution buffer that stores a variable-length operand , the variable-length operand representing the second variable-length string ;
a first fixed-length register storing a pointer that indicates a location for the variable-length operand ;
a second fixed-length register storing an operand-length that indicates a length for the variable-length operand or storing a pointer that indicates the end of the variable-length operand ;
wherein a functional-level instruction identifies the variable-length operand stored in the execution buffer by specifying a first register number for the first fixed-length register and a second register number for the second fixed-length register , the lookup unit searches the lookup tables for an entry , and the first variable-length string stored in the tag for the entry matches the variable-length operand .
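
Claim 11 enumerates the kinds of alternate cloud resources a VM's consumption may be migrated to: public, community, private, and hybrid clouds, Internet resources, and VPNs. A minimal sketch of selecting such a target pool is given below; the capacity model and selection rule are assumptions for illustration only.

from enum import Enum
from typing import Dict, Optional

class AlternatePool(Enum):
    PUBLIC_CLOUD = "public cloud"
    COMMUNITY_CLOUD = "community cloud"
    PRIVATE_CLOUD = "private cloud"
    HYBRID_CLOUD = "hybrid cloud"
    INTERNET_RESOURCES = "Internet resources"
    VPN = "virtual private network"

def pick_alternate(available: Dict[AlternatePool, float], needed: float) -> Optional[AlternatePool]:
    """Pick the pool outside the local environment with the most spare capacity
    that can still absorb the migrated consumption (an illustrative rule only)."""
    best = max(available, key=available.get, default=None)
    if best is not None and available[best] >= needed:
        return best
    return None

assert pick_alternate({AlternatePool.PUBLIC_CLOUD: 0.6, AlternatePool.VPN: 0.1}, 0.3) is AlternatePool.PUBLIC_CLOUD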

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (central processing unit) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (central processing unit) , memory usage (storage area) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20080022073A1
CLAIM 9
. The file server of claim 8 , wherein the input buffer includes plural storage areas (memory usage) , and a storage area is assigned to the context .

US20080022073A1
CLAIM 10
. A file server comprising : a central processing unit (processor usage, I/O access rate) (CPU) running an instruction set including a plurality of instructions to operate a file system that organizes a plurality of files and meta-data pertaining to the files ;
and an offload device coupled to receive file-access requests from a network , the offload device comprising an output buffer coupled to the CPU ;
a memory , the memory allocated by the offload device , the memory storing lookup tables having entries , one or more of the entries comprising a tag storing a first variable-length string ;
processing units for executing functional-level instructions belonging to a functional-level instruction set , the functional-level instructions including instructions operating on operands of variable-length , the processing units comprising a lookup unit ;
and one or more processing slices operating in parallel , each processing slice processing a request , the request seeking to access a file , the file being one of the plurality of files , the request including a second variable-length string , the request having a context allocated by the offload device , the context including : an execution buffer that stores a variable-length operand , the variable-length operand representing the second variable-length string ;
a first fixed-length register storing a pointer that indicates a location for the variable-length operand ;
a second fixed-length register storing an operand-length that indicates a length for the variable-length operand or storing a pointer that indicates the end of the variable-length operand ;
wherein a functional-level instruction identifies the variable-length operand stored in the execution buffer by specifying a first register number for the first fixed-length register and a second register number for the second fixed-length register , the lookup unit searches the lookup tables for an entry , and the first variable-length string stored in the tag for the entry matches the variable-length operand .
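
System claim 13 adds, among the monitored quantities, a change region size determined according to changed regions of a graphical display generated by a VM. One way to picture that quantity is as the area of the bounding box covering the pixels that differ between two successive frames; the frame model and measure below are assumptions, not the patent's definition.

def change_region_size(prev, curr):
    """Bounding-box area of the regions that changed between two equal-sized frames,
    each modeled as a 2-D list of pixel values (illustrative measure only)."""
    changed = [(r, c)
               for r, row in enumerate(curr)
               for c, px in enumerate(row)
               if prev[r][c] != px]
    if not changed:
        return 0
    rows = [r for r, _ in changed]
    cols = [c for _, c in changed]
    return (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)

# Two 3x3 frames differing in one 2x2 corner -> change region size 4.
a = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
b = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
assert change_region_size(a, b) == 4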

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (central processing unit) tracking .
US20080022073A1
CLAIM 10
. A file server comprising : a central processing unit (processor usage, I/O access rate) (CPU) running an instruction set including a plurality of instructions to operate a file system that organizes a plurality of files and meta-data pertaining to the files ;
and an offload device coupled to receive file-access requests from a network , the offload device comprising an output buffer coupled to the CPU ;
a memory , the memory allocated by the offload device , the memory storing lookup tables having entries , one or more of the entries comprising a tag storing a first variable-length string ;
processing units for executing functional-level instructions belonging to a functional-level instruction set , the functional-level instructions including instructions operating on operands of variable-length , the processing units comprising a lookup unit ;
and one or more processing slices operating in parallel , each processing slice processing a request , the request seeking to access a file , the file being one of the plurality of files , the request including a second variable-length string , the request having a context allocated by the offload device , the context including : an execution buffer that stores a variable-length operand , the variable-length operand representing the second variable-length string ;
a first fixed-length register storing a pointer that indicates a location for the variable-length operand ;
a second fixed-length register storing an operand-length that indicates a length for the variable-length operand or storing a pointer that indicates the end of the variable-length operand ;
wherein a functional-level instruction identifies the variable-length operand stored in the execution buffer by specifying a first register number for the first fixed-length register and a second register number for the second fixed-length register , the lookup unit searches the lookup tables for an entry , and the first variable-length string stored in the tag for the entry matches the variable-length operand .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (storage area) tracking .
US20080022073A1
CLAIM 9
. The file server of claim 8 , wherein the input buffer includes plural storage areas (memory usage) , and a storage area is assigned to the context .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (more process) , or resources included in virtual private networks (VPNs) .
US20080022073A1
CLAIM 10
. A file server comprising : a central processing unit (CPU) running an instruction set including a plurality of instructions to operate a file system that organizes a plurality of files and meta-data pertaining to the files ;
and an offload device coupled to receive file-access requests from a network , the offload device comprising an output buffer coupled to the CPU ;
a memory , the memory allocated by the offload device , the memory storing lookup tables having entries , one or more of the entries comprising a tag storing a first variable-length string ;
processing units for executing functional-level instructions belonging to a functional-level instruction set , the functional-level instructions including instructions operating on operands of variable-length , the processing units comprising a lookup unit ;
and one or more processing slices (Internet resources) operating in parallel , each processing slice processing a request , the request seeking to access a file , the file being one of the plurality of files , the request including a second variable-length string , the request having a context allocated by the offload device , the context including : an execution buffer that stores a variable-length operand , the variable-length operand representing the second variable-length string ;
a first fixed-length register storing a pointer that indicates a location for the variable-length operand ;
a second fixed-length register storing an operand-length that indicates a length for the variable-length operand or storing a pointer that indicates the end of the variable-length operand ;
wherein a functional-level instruction identifies the variable-length operand stored in the execution buffer by specifying a first register number for the first fixed-length register and a second register number for the second fixed-length register , the lookup unit searches the lookup tables for an entry , and the first variable-length string stored in the tag for the entry matches the variable-length operand .
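
The US20080022073A1 claims charted in this section turn on a lookup unit that searches lookup tables for an entry whose tag, a first variable-length string, matches a variable-length operand taken from the request. A toy model of that matching step is sketched below; the table and entry structures are assumptions.

def lookup(tables, operand):
    """Return the first entry whose variable-length tag equals the variable-length operand."""
    for table in tables:
        for entry in table:
            if entry["tag"] == operand:
                return entry
    return None

tables = [[{"tag": "/var/log", "inode": 7}], [{"tag": "/etc/hosts", "inode": 3}]]
assert lookup(tables, "/etc/hosts")["inode"] == 3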




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20080104587A1

Filed: 2006-10-27     Issued: 2008-05-01

Migrating a virtual machine from a first physical machine in response to receiving a command to lower a power mode of the first physical machine

(Original Assignee) Hewlett Packard Development Co LP     (Current Assignee) Hewlett Packard Enterprise Development LP

Daniel J. Magenheimer, Bret A. McKee, Robert D. Gardner, Chris D. Hyser
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment (virtual machines) , comprising : determining a consumption rate of cloud resources (virtual machines) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (lower power) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20080104587A1
CLAIM 1
. A method comprising : receiving a command to place a first physical machine into a lower power (maximum capacity) mode , wherein the first physical machine has a virtual machine ;
and in response to the received command , performing a procedure to migrate the virtual machine from the first physical machine to a second physical machine .

US20080104587A1
CLAIM 6
. The method of claim 4 , wherein sending the at least one indication to the at least one node comprises sending the at least one indication to a placement controller that manages placement of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) on plural physical machines .
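
Claims 1 and 6 of US20080104587A1, cited against the migration and prioritization limitations above, describe a placement controller migrating a virtual machine off a first physical machine in response to a command to place that machine into a lower power mode. A minimal sketch of that behavior follows; the data structures and the single-target migration rule are assumptions.

class PlacementController:
    """Hypothetical controller that manages placement of VMs on plural physical machines."""
    def __init__(self, placement):
        self.placement = dict(placement)      # vm_id -> physical machine id

    def lower_power_mode(self, machine_id, target_machine_id):
        """Migrate every VM hosted on machine_id to target_machine_id before lowering power."""
        migrated = [vm for vm, host in self.placement.items() if host == machine_id]
        for vm in migrated:
            self.placement[vm] = target_machine_id   # perform the migration procedure
        return migrated                              # VMs moved in response to the command

pc = PlacementController({"vm1": "pm1", "vm2": "pm2"})
assert pc.lower_power_mode("pm1", "pm2") == ["vm1"]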

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20080104587A1
CLAIM 6
. The method of claim 4 , wherein sending the at least one indication to the at least one node comprises sending the at least one indication to a placement controller that manages placement of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) on plural physical machines .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the LIRS replacement scheme comprises using LIRS based processor usage tracking (power mode) .
US20080104587A1
CLAIM 1
. A method comprising : receiving a command to place a first physical machine into a lower power mode (processor usage tracking, memory usage tracking, CPU consumption rate) , wherein the first physical machine has a virtual machine ;
and in response to the received command , performing a procedure to migrate the virtual machine from the first physical machine to a second physical machine .

US20080104587A1
CLAIM 6
. The method of claim 4 , wherein sending the at least one indication to the at least one node comprises sending the at least one indication to a placement controller that manages placement of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) on plural physical machines .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the LIRS replacement scheme comprises using LIRS based memory usage tracking (power mode) .
US20080104587A1
CLAIM 1
. A method comprising : receiving a command to place a first physical machine into a lower power mode (processor usage tracking, memory usage tracking, CPU consumption rate) , wherein the first physical machine has a virtual machine ;
and in response to the received command , performing a procedure to migrate the virtual machine from the first physical machine to a second physical machine .

US20080104587A1
CLAIM 6
. The method of claim 4 , wherein sending the at least one indication to the at least one node comprises sending the at least one indication to a placement controller that manages placement of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) on plural physical machines .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
US20080104587A1
CLAIM 6
. The method of claim 4 , wherein sending the at least one indication to the at least one node comprises sending the at least one indication to a placement controller that manages placement of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) on plural physical machines .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
US20080104587A1
CLAIM 6
. The method of claim 4 , wherein sending the at least one indication to the at least one node comprises sending the at least one indication to a placement controller that manages placement of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) on plural physical machines .
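
Claims 6, 12 and 18 of US9635134B2 recite a least recently used (LRU) replacement scheme as the second resource management scheme. The sketch below orders VMs by recency of resource consumption and treats the least recently used VM as the first migration candidate; that eviction choice is an assumption for illustration.

from collections import OrderedDict

class LruPriority:
    def __init__(self):
        self._order = OrderedDict()   # least recently used VM first

    def touch(self, vm_id):
        """Record that vm_id just consumed cloud resources."""
        self._order.pop(vm_id, None)
        self._order[vm_id] = True     # move vm_id to the most-recently-used end

    def migration_candidate(self):
        """Least recently used VM: first candidate to move to alternate resources."""
        return next(iter(self._order), None)

lru = LruPriority()
for vm in ("vm1", "vm2", "vm1"):
    lru.touch(vm)
assert lru.migration_candidate() == "vm2"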

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources (virtual machines) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment (virtual machines) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (lower power) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20080104587A1
CLAIM 1
. A method comprising : receiving a command to place a first physical machine into a lower power (maximum capacity) mode , wherein the first physical machine has a virtual machine ;
and in response to the received command , performing a procedure to migrate the virtual machine from the first physical machine to a second physical machine .

US20080104587A1
CLAIM 6
. The method of claim 4 , wherein sending the at least one indication to the at least one node comprises sending the at least one indication to a placement controller that manages placement of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) on plural physical machines .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using a low inter-reference recency set (LIRS) replacement scheme .
US20080104587A1
CLAIM 6
. The method of claim 4 , wherein sending the at least one indication to the at least one node comprises sending the at least one indication to a placement controller that manages placement of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) on plural physical machines .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage tracking (power mode) .
US20080104587A1
CLAIM 1
. A method comprising : receiving a command to place a first physical machine into a lower power mode (processor usage tracking, memory usage tracking, CPU consumption rate) , wherein the first physical machine has a virtual machine ;
and in response to the received command , performing a procedure to migrate the virtual machine from the first physical machine to a second physical machine .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage tracking (power mode) .
US20080104587A1
CLAIM 1
. A method comprising : receiving a command to place a first physical machine into a lower power mode (processor usage tracking, memory usage tracking, CPU consumption rate) , wherein the first physical machine has a virtual machine ;
and in response to the received command , performing a procedure to migrate the virtual machine from the first physical machine to a second physical machine .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
US20080104587A1
CLAIM 6
. The method of claim 4 , wherein sending the at least one indication to the at least one node comprises sending the at least one indication to a placement controller that manages placement of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) on plural physical machines .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using least recently used (LRU) replacement scheme .
US20080104587A1
CLAIM 6
. The method of claim 4 , wherein sending the at least one indication to the at least one node comprises sending the at least one indication to a placement controller that manages placement of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) on plural physical machines .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment (virtual machines) , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (virtual machines) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate (power mode) , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (lower power) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20080104587A1
CLAIM 1
. A method comprising : receiving a command to place a first physical machine into a lower power (maximum capacity) mode , wherein the first physical machine has a virtual machine ;
and in response to the received command , performing a procedure to migrate the virtual machine from the first physical machine to a second physical machine .

US20080104587A1
CLAIM 6
. The method of claim 4 , wherein sending the at least one indication to the at least one node comprises sending the at least one indication to a placement controller that manages placement of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) on plural physical machines .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using a low inter-reference recency set (LIRS) replacement scheme .
US20080104587A1
CLAIM 6
. The method of claim 4 , wherein sending the at least one indication to the at least one node comprises sending the at least one indication to a placement controller that manages placement of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) on plural physical machines .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage tracking (power mode) .
US20080104587A1
CLAIM 1
. A method comprising : receiving a command to place a first physical machine into a lower power mode (processor usage tracking, memory usage tracking, CPU consumption rate) , wherein the first physical machine has a virtual machine ;
and in response to the received command , performing a procedure to migrate the virtual machine from the first physical machine to a second physical machine .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage tracking (power mode) .
US20080104587A1
CLAIM 1
. A method comprising : receiving a command to place a first physical machine into a lower power mode (processor usage tracking, memory usage tracking, CPU consumption rate) , wherein the first physical machine has a virtual machine ;
and in response to the received command , performing a procedure to migrate the virtual machine from the first physical machine to a second physical machine .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
US20080104587A1
CLAIM 6
. The method of claim 4 , wherein sending the at least one indication to the at least one node comprises sending the at least one indication to a placement controller that manages placement of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) on plural physical machines .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using least recently used (LRU) replacement scheme .
US20080104587A1
CLAIM 6
. The method of claim 4 , wherein sending the at least one indication to the at least one node comprises sending the at least one indication to a placement controller that manages placement of virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) on plural physical machines .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20080022282A1

Filed: 2006-07-20     Issued: 2008-01-24

System and method for evaluating performance of a workload manager

(Original Assignee) Hewlett Packard Development Co LP     (Current Assignee) Hewlett Packard Enterprise Development LP

Ludmila Cherkasova, Jerome Rolia, Clifford A. McCarthy
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (central processing unit) , memory usage (load demand) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20080022282A1
CLAIM 14
. A method comprising : receiving , by a workload manager evaluator , a workload demanding (memory usage) access to at least one shared resource ;
receiving , by the workload manager evaluator , a user-defined value for at least one desired performance parameter ;
and evaluating , by the workload manager evaluator , performance of at least one workload manager in setting values for at least one control parameter of a scheduler for managing access to the at least one shared resource .

US20080022282A1
CLAIM 15
. The method of claim 14 wherein the at least one shared resource comprises at least one central processing unit (processor usage, I/O access rate) (CPU) , and wherein at least one desired performance parameter comprises at least one of the following : a) a lowerAllocUtil threshold control parameter that triggers a decrease of CPU allocation for the workload , and b) a upperAllocUtil threshold control parameter that triggers an increase of the CPU allocation for the workload ;
and wherein the at least one control parameter of a scheduler comprises at least one of the following : a) a minCPU allocation control parameter that defines a minimum CPU allocation amount for the workload ;
and b) a maxCPU allocation control parameter that defines a maximum CPU allocation amount for the workload .
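
Claim 15 of US20080022282A1 names the control parameters relied on for the processor usage and maximum capacity mappings: lowerAllocUtil and upperAllocUtil thresholds that trigger a decrease or increase of the CPU allocation, bounded by minCPU and maxCPU. The controller below is an illustrative reading of how such parameters interact; the step size and utilization model are assumptions.

def adjust_cpu_allocation(alloc, utilization,
                          lowerAllocUtil=0.3, upperAllocUtil=0.8,
                          minCPU=0.1, maxCPU=4.0, step=0.25):
    """Return a new CPU allocation (in CPU shares) given the current allocation and
    the measured utilization of that allocation (0..1)."""
    if utilization > upperAllocUtil:        # triggers an increase of the CPU allocation
        alloc += step
    elif utilization < lowerAllocUtil:      # triggers a decrease of the CPU allocation
        alloc -= step
    return min(max(alloc, minCPU), maxCPU)  # clamp between minCPU and maxCPU

assert adjust_cpu_allocation(1.0, 0.9) == 1.25   # over-utilized: allocation grows
assert adjust_cpu_allocation(0.2, 0.1) == 0.1    # under-utilized: allocation shrinks to minCPU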

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (central processing unit) tracking .
US20080022282A1
CLAIM 15
. The method of claim 14 wherein the at least one shared resource comprises at least one central processing unit (processor usage, I/O access rate) (CPU) , and wherein at least one desired performance parameter comprises at least one of the following : a) a lowerAllocUtil threshold control parameter that triggers a decrease of CPU allocation for the workload , and b) a upperAllocUtil threshold control parameter that triggers an increase of the CPU allocation for the workload ;
and wherein the at least one control parameter of a scheduler comprises at least one of the following : a) a minCPU allocation control parameter that defines a minimum CPU allocation amount for the workload ;
and b) a maxCPU allocation control parameter that defines a maximum CPU allocation amount for the workload .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (load demand) tracking .
US20080022282A1
CLAIM 14
. A method comprising : receiving , by a workload manager evaluator , a workload demanding (memory usage) access to at least one shared resource ;
receiving , by the workload manager evaluator , a user-defined value for at least one desired performance parameter ;
and evaluating , by the workload manager evaluator , performance of at least one workload manager in setting values for at least one control parameter of a scheduler for managing access to the at least one shared resource .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (central processing unit) , memory usage (load demand) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20080022282A1
CLAIM 14
. A method comprising : receiving , by a workload manager evaluator , a workload demanding (memory usage) access to at least one shared resource ;
receiving , by the workload manager evaluator , a user-defined value for at least one desired performance parameter ;
and evaluating , by the workload manager evaluator , performance of at least one workload manager in setting values for at least one control parameter of a scheduler for managing access to the at least one shared resource .

US20080022282A1
CLAIM 15
. The method of claim 14 wherein the at least one shared resource comprises at least one central processing unit (processor usage, I/O access rate) (CPU) , and wherein at least one desired performance parameter comprises at least one of the following : a) a lowerAllocUtil threshold control parameter that triggers a decrease of CPU allocation for the workload , and b) a upperAllocUtil threshold control parameter that triggers an increase of the CPU allocation for the workload ;
and wherein the at least one control parameter of a scheduler comprises at least one of the following : a) a minCPU allocation control parameter that defines a minimum CPU allocation amount for the workload ;
and b) a maxCPU allocation control parameter that defines a maximum CPU allocation amount for the workload .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (central processing unit) tracking .
US20080022282A1
CLAIM 15
. The method of claim 14 wherein the at least one shared resource comprises at least one central processing unit (processor usage, I/O access rate) (CPU) , and wherein at least one desired performance parameter comprises at least one of the following : a) a lowerAllocUtil threshold control parameter that triggers a decrease of CPU allocation for the workload , and b) a upperAllocUtil threshold control parameter that triggers an increase of the CPU allocation for the workload ;
and wherein the at least one control parameter of a scheduler comprises at least one of the following : a) a minCPU allocation control parameter that defines a minimum CPU allocation amount for the workload ;
and b) a maxCPU allocation control parameter that defines a maximum CPU allocation amount for the workload .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (load demand) tracking .
US20080022282A1
CLAIM 14
. A method comprising : receiving , by a workload manager evaluator , a workload demanding (memory usage) access to at least one shared resource ;
receiving , by the workload manager evaluator , a user-defined value for at least one desired performance parameter ;
and evaluating , by the workload manager evaluator , performance of at least one workload manager in setting values for at least one control parameter of a scheduler for managing access to the at least one shared resource .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (central processing unit) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (central processing unit) , memory usage (load demand) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20080022282A1
CLAIM 14
. A method comprising : receiving , by a workload manager evaluator , a workload demanding (memory usage) access to at least one shared resource ;
receiving , by the workload manager evaluator , a user-defined value for at least one desired performance parameter ;
and evaluating , by the workload manager evaluator , performance of at least one workload manager in setting values for at least one control parameter of a scheduler for managing access to the at least one shared resource .

US20080022282A1
CLAIM 15
. The method of claim 14 wherein the at least one shared resource comprises at least one central processing unit (processor usage, I/O access rate) (CPU) , and wherein at least one desired performance parameter comprises at least one of the following : a) a lowerAllocUtil threshold control parameter that triggers a decrease of CPU allocation for the workload , and b) a upperAllocUtil threshold control parameter that triggers an increase of the CPU allocation for the workload ;
and wherein the at least one control parameter of a scheduler comprises at least one of the following : a) a minCPU allocation control parameter that defines a minimum CPU allocation amount for the workload ;
and b) a maxCPU allocation control parameter that defines a maximum CPU allocation amount for the workload .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (central processing unit) tracking .
US20080022282A1
CLAIM 15
. The method of claim 14 wherein the at least one shared resource comprises at least one central processing unit (processor usage, I/O access rate) (CPU) , and wherein at least one desired performance parameter comprises at least one of the following : a) a lowerAllocUtil threshold control parameter that triggers a decrease of CPU allocation for the workload , and b) a upperAllocUtil threshold control parameter that triggers an increase of the CPU allocation for the workload ;
and wherein the at least one control parameter of a scheduler comprises at least one of the following : a) a minCPU allocation control parameter that defines a minimum CPU allocation amount for the workload ;
and b) a maxCPU allocation control parameter that defines a maximum CPU allocation amount for the workload .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (load demand) tracking .
US20080022282A1
CLAIM 14
. A method comprising : receiving , by a workload manager evaluator , a workload demanding (memory usage) access to at least one shared resource ;
receiving , by the workload manager evaluator , a user-defined value for at least one desired performance parameter ;
and evaluating , by the workload manager evaluator , performance of at least one workload manager in setting values for at least one control parameter of a scheduler for managing access to the at least one shared resource .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20070204265A1

Filed: 2006-02-28     Issued: 2007-08-30

Migrating a virtual machine that owns a resource such as a hardware device

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC

Jacob Oshins
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing (second platform, first platform) environment (virtual machines) , comprising : determining a consumption rate of cloud resources (virtual machines) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform (cloud computing) to a second platform (cloud computing) , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .
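
Claim 1 of US20070204265A1, charted against the migration limitations here, describes a port redirector that queues access requests at a first port until the VM is directed to be saved or migrated, then queues them at a second port and forwards the second-port queue only after the resource has acted on the first-port requests and changed owner. The sketch below is a simplified model of that ordering guarantee; the queue structures and method names are assumptions.

from collections import deque

class PortRedirector:
    def __init__(self):
        self.first_port = deque()
        self.second_port = deque()
        self.migrating = False

    def send(self, request):
        # Requests queue at the first port until migration is directed, then at the second port.
        (self.second_port if self.migrating else self.first_port).append(request)

    def begin_migration(self):
        self.migrating = True

    def complete_ownership_transfer(self):
        """Drain the first port, then forward the second port to the new owner VM, in order."""
        drained = list(self.first_port)
        self.first_port.clear()
        forwarded = list(self.second_port)
        self.second_port.clear()
        return drained + forwarded

pr = PortRedirector()
pr.send("read-1")
pr.begin_migration()
pr.send("read-2")
assert pr.complete_ownership_transfer() == ["read-1", "read-2"]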

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform to a second platform , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the LIRS replacement scheme comprises using LIRS based processor usage tracking .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform to a second platform , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .
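
Claims 2 through 4 recite prioritizing VMs with a low inter-reference recency set (LIRS) replacement scheme driven by processor or memory usage tracking. The sketch below is a simplification for illustration only: it ranks VMs by the inter-reference recency of their tracked usage samples, whereas a full LIRS implementation additionally maintains LIR/HIR sets and a recency stack.

    import time
    from collections import defaultdict

    class LirsStylePrioritizer:
        """Simplified LIRS-style ranking of VMs (not a full LIRS implementation)."""

        def __init__(self):
            self.last_seen = {}                               # vm_id -> last sample time
            self.irr = defaultdict(lambda: float("inf"))      # vm_id -> inter-reference recency

        def record_usage(self, vm_id):
            # Called whenever processor or memory usage is sampled for a VM.
            now = time.monotonic()
            if vm_id in self.last_seen:
                self.irr[vm_id] = now - self.last_seen[vm_id]
            self.last_seen[vm_id] = now

        def prioritize(self, vm_ids):
            # Lower inter-reference recency -> higher priority for cloud resources.
            return sorted(vm_ids, key=lambda vm: self.irr[vm])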

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the LIRS replacement scheme comprises using LIRS based memory usage tracking .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform to a second platform , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform to a second platform , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .
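
Claim 5 enumerates the kinds of alternate cloud resources that may sit outside the cloud computing environment. A hypothetical data-model sketch, assuming a trivial preference-order policy that is not taken from the patent:

    from enum import Enum, auto

    class AlternateCloud(Enum):
        PUBLIC = auto()
        COMMUNITY = auto()
        PRIVATE = auto()
        HYBRID = auto()
        INTERNET = auto()
        VPN = auto()

    def pick_migration_target(available):
        # Illustrative policy only: prefer private, then hybrid, then public.
        for preferred in (AlternateCloud.PRIVATE, AlternateCloud.HYBRID, AlternateCloud.PUBLIC):
            if preferred in available:
                return preferred
        return next(iter(available), None)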

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform to a second platform , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .
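
Claim 6 switches the prioritization to a least recently used (LRU) replacement scheme. A minimal sketch, assuming the least recently observed VM is the first candidate for demotion or migration:

    from collections import OrderedDict

    class LruPrioritizer:
        """Minimal LRU ordering of VMs by most recent observed resource use."""

        def __init__(self):
            self.order = OrderedDict()       # oldest entry first

        def record_usage(self, vm_id):
            self.order.pop(vm_id, None)
            self.order[vm_id] = None         # move to most-recently-used position

        def prioritize(self, vm_ids):
            # VMs never observed sort first, then least recently used.
            rank = {vm: i for i, vm in enumerate(self.order)}
            return sorted(vm_ids, key=lambda vm: rank.get(vm, -1))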

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing (second platform, first platform) resource manager to : determine a consumption rate of cloud resources (virtual machines) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment (virtual machines) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform (cloud computing) to a second platform (cloud computing) , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .
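
Claim 7 (and claim 13 in system form) ties the elements together: determine a consumption rate, prioritize under a first scheme, test the change in consumption rate against a threshold, re-prioritize under a second scheme when the threshold and the maximum allowed capacity are implicated, and migrate to alternate cloud resources. The control-flow sketch below assumes the monitor, prioritizer, and migrate interfaces; it is not the patent's implementation.

    def manage_cloud_resources(vms, monitor, first_scheme, second_scheme,
                               threshold, max_capacity, migrate):
        # Determine consumption rates from processor, memory, and I/O monitoring.
        rates = {vm: monitor.consumption_rate(vm) for vm in vms}

        # Prioritize under the first resource management scheme.
        priority = first_scheme.prioritize(vms)

        # Determine whether the change in consumption rate exceeds the threshold.
        over_threshold = any(abs(monitor.change_in_rate(vm)) > threshold for vm in vms)

        if over_threshold and sum(rates.values()) >= max_capacity:
            # Prioritize under the second resource management scheme and migrate
            # the top-priority VM's consumption to alternate cloud resources
            # outside the cloud computing environment.
            priority = second_scheme.prioritize(vms)
            migrate(priority[0], target="alternate cloud resources")
        return priority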

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (second platform, first platform) resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using a low inter-reference recency set (LIRS) replacement scheme .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform (cloud computing) to a second platform (cloud computing) , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (second platform, first platform) resource manager to use LIRS based processor usage tracking .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform (cloud computing) to a second platform (cloud computing) , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (second platform, first platform) resource manager to use LIRS based memory usage tracking .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform (cloud computing) to a second platform (cloud computing) , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform to a second platform , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing (second platform, first platform) resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using least recently used (LRU) replacement scheme .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform (cloud computing) to a second platform (cloud computing) , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing (second platform, first platform) environment (virtual machines) , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (virtual machines) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform (cloud computing) to a second platform (cloud computing) , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .
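
Claim 13 adds a "change region size determined according to changed regions of a graphical display" as one possible consumption-rate input. One plausible reading, sketched here under assumed frame and tile representations, is the fraction of display tiles that differ between consecutive frames:

    def change_region_size(prev_frame, curr_frame, tile=16):
        # Frames are equally sized 2-D sequences of pixel values (an assumption).
        height, width = len(curr_frame), len(curr_frame[0])
        changed = total = 0
        for y in range(0, height, tile):
            for x in range(0, width, tile):
                total += 1
                if any(prev_frame[j][i] != curr_frame[j][i]
                       for j in range(y, min(y + tile, height))
                       for i in range(x, min(x + tile, width))):
                    changed += 1
        return changed / total if total else 0.0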

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (second platform, first platform) resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using a low inter-reference recency set (LIRS) replacement scheme .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform (cloud computing) to a second platform (cloud computing) , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (second platform, first platform) resource manager to use LIRS based processor usage tracking .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform (cloud computing) to a second platform (cloud computing) , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (second platform, first platform) resource manager to use LIRS based memory usage tracking .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform (cloud computing) to a second platform (cloud computing) , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform to a second platform , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing (second platform, first platform) resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using least recently used (LRU) replacement scheme .
US20070204265A1
CLAIM 1
. A computing system comprising : a resource for providing a resource service ;
and a computing device having first and second virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) (VMs) instantiated on the computing device , each VM for hosting an instance of an operating system upon which one or more applications may be instantiated , the first VM initially being communicatively coupled to the resource and the resource being initially assigned to the first VM such that the first VM initially owns the resource and the service provided thereby , the first VM being a software construct on the computing device that can be saved and migrated from a first platform (cloud computing) to a second platform (cloud computing) , the first VM including : a resource stack corresponding to and accessing the resource according to access requests sent by way of such resource stack ;
a first port communicatively coupled to the resource ;
a second port communicatively coupled to a communications medium ;
and a port redirector communicatively coupled to the resource stack , the first port and the second port and forwarding each access request from the resource stack to be queued at one of the first port and the second port , the port redirector forwarding each access request from the resource stack to be queued at the first port until the first VM is directed to be saved or migrated , each access request at the first port being further forwarded in turn to the resource to be acted upon by such resource , the port redirector forwarding each access request from the resource stack to the second port upon the first VM being directed to be saved or migrated and thereafter , each access request at the second port being further forwarded in turn only after the resource has acted upon all access requests queued at the first port and thereafter has been removed from being owned by the first VM , the second VM subsequently being communicatively coupled to the resource and the resource being subsequently assigned to the second VM after the resource is removed from the first VM such that the second VM subsequently owns the resource and the service provided thereby , the second VM as owner of the resource being communicatively coupled to the second port of the first VM by way of the communications medium , each access request at the second port being further forwarded in turn to the second VM by way of the communications medium and then further forwarded in turn to the resource by way of the second VM to be acted upon by such resource , whereby all access requests from the resource stack of the first VM are acted upon by the resource in turn even after the resource is removed from the first VM and assigned to the second VM and the save or migrate can thereafter be completed .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
WO2007087828A1

Filed: 2006-02-05     Issued: 2007-08-09

Method and devices for installing packet filters in a data transmission

(Original Assignee) Telefonaktiebolaget LM Ericsson (Publ)     

Per Willars, Reiner Ludwig, Hannes Ekström, Henrik Basilier
US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage tracking (processing unit) .
WO2007087828A1
CLAIM 14
. A control entity for a communication network with a user equipment (UE1) , wherein an application function of the user equipment is adapted to send a data packet in a data flow , a packet bearer (PB) can be established with the user equipment to transmit the data packet (DP) over the communication network towards a further entity , and wherein the user equipment is adapted to establish different packet bearers , the control entity comprising an input unit adapted to receive the flow with the data packet or information related to the flow , a processing unit (processor usage tracking) (PUC) with an identification function (IF) adapted to identify the flow , with a policy function (PF) adapted to determine the packet bearer for association with said flow from the different packet bearers , and with a determination function (DRI) for determining a routing level identification of the further entity , and an output unit for instructing the user equipment to install a packet filter based on the routing level identification , wherein the packet filter associates data packets comprising the routing level identification of the further entity with the determined packet bearer .
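
To make the mapped elements of WO2007087828A1 claim 14 concrete, the following sketch (hypothetical names; not the application's implementation) shows a control entity identifying a flow, selecting a packet bearer via a policy function, determining the routing-level identification of the further entity, and instructing the user equipment to install a matching packet filter:

    from dataclasses import dataclass

    @dataclass
    class PacketFilter:
        remote_address: str   # routing-level identification of the further entity
        bearer_id: int        # packet bearer that matching packets are mapped onto

    def control_entity(flow, bearers, policy, user_equipment):
        bearer_id = policy(flow, bearers)        # policy function selects the bearer
        remote = flow["destination_address"]     # determination of the routing-level id
        user_equipment.install_filter(
            PacketFilter(remote_address=remote, bearer_id=bearer_id))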

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud (associated item) , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
WO2007087828A1
CLAIM 7
. The method according to any preceding claim , wherein the different packet bearers differ in at least one associated item (public cloud) from a group comprising a quality of service , a charging tariff and an access point to which the packet is forwarded .
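
For reference, WO2007087828A1 claim 7 distinguishes packet bearers by quality of service, charging tariff, or access point; an illustrative record of those fields (field names are assumptions):

    from dataclasses import dataclass

    @dataclass
    class PacketBearer:
        bearer_id: int
        qos_class: str         # e.g. conversational vs. background
        charging_tariff: str   # e.g. flat-rate vs. per-volume
        access_point: str      # access point the packet is forwarded to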

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage tracking (processing unit) .
WO2007087828A1
CLAIM 14
. A control entity for a communication network with a user equipment (UE1) , wherein an application function of the user equipment is adapted to send a data packet in a data flow , a packet bearer (PB) can be established with the user equipment to transmit the data packet (DP) over the communication network towards a further entity , and wherein the user equipment is adapted to establish different packet bearers , the control entity comprising an input unit adapted to receive the flow with the data packet or information related to the flow , a processing unit (processor usage tracking) (PUC) with an identification function (IF) adapted to identify the flow , with a policy function (PF) adapted to determine the packet bearer for association with said flow from the different packet bearers , and with a determination function (DRI) for determining a routing level identification of the further entity , and an output unit for instructing the user equipment to install a packet filter based on the routing level identification , wherein the packet filter associates data packets comprising the routing level identification of the further entity with the determined packet bearer .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud (associated item) , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
WO2007087828A1
CLAIM 7
. The method according to any preceding claim , wherein the different packet bearers differ in at least one associated item (public cloud) from a group comprising a quality of service , a charging tariff and an access point to which the packet is forwarded .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage tracking (processing unit) .
WO2007087828A1
CLAIM 14
. A control entity for a communication network with a user equipment (UE1) , wherein an application function of the user equipment is adapted to send a data packet in a data flow , a packet bearer (PB) can be established with the user equipment to transmit the data packet (DP) over the communication network towards a further entity , and wherein the user equipment is adapted to establish different packet bearers , the control entity comprising an input unit adapted to receive the flow with the data packet or information related to the flow , a processing unit (processor usage tracking) (PUC) with an identification function (IF) adapted to identify the flow , with a policy function (PF) adapted to determine the packet bearer for association with said flow from the different packet bearers , and with a determination function (DRI) for determining a routing level identification of the further entity , and an output unit for instructing the user equipment to install a packet filter based on the routing level identification , wherein the packet filter associates data packets comprising the routing level identification of the further entity with the determined packet bearer .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud (associated item) , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
WO2007087828A1
CLAIM 7
. The method according to any preceding claim , wherein the different packet bearers differ in at least one associated item (public cloud) from a group comprising a quality of service , a charging tariff and an access point to which the packet is forwarded .
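
As a reading aid for the WO'828 reference charted above (claim 14), the sketch below models, with hypothetical names and simplified types, a control entity that identifies a flow, selects a packet bearer by policy, determines the further entity's routing-level identification, and instructs the user equipment to install a packet filter binding matching packets to that bearer. It is illustrative only and does not reproduce the reference's implementation.

```python
from dataclasses import dataclass

@dataclass
class PacketFilter:
    routing_level_id: str   # e.g. the far end's address-level tag (assumption)
    bearer_id: int          # packet bearer that matching packets are mapped onto

class ControlEntity:
    """Sketch of a control entity in the style of WO'828 claim 14: identify a
    flow, pick a bearer by policy, determine the routing-level identification
    of the further entity, and instruct the UE to install a packet filter."""

    def __init__(self, bearer_policy):
        # bearer_policy: callable mapping a flow description to one of the UE's bearers
        self.bearer_policy = bearer_policy

    def handle_flow(self, flow_info, ue):
        # identification function: derive a flow identity from the received information
        flow_id = (flow_info["src"], flow_info["dst"], flow_info["service"])
        # policy function: choose the bearer to associate with the flow
        bearer_id = self.bearer_policy(flow_info)
        # determination function: routing-level identification of the further entity
        routing_level_id = flow_info["dst"]
        # output unit: `ue` is assumed to expose a hypothetical install method
        ue.install_packet_filter(PacketFilter(routing_level_id, bearer_id))
        return flow_id, bearer_id
```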




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20050120160A1

Filed: 2004-10-25     Issued: 2005-06-02

System and method for managing virtual servers

(Original Assignee) Virtual Iron Software Inc; Katana Technology Inc     (Current Assignee) Oracle International Corp ; Virtual Iron Software Inc

Jerry Plouffe, Scott Davis, Alexander Vasilevsky, Benjamin Thomas, Steven Noyes, Tom Hazel
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate (virtual server) of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (command line) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud (processing resource) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20050120160A1
CLAIM 6
. The system as recited in claim 1 , wherein the interface includes at least one of a group comprising : a graphical user interface ;
a command line (maximum capacity) interface ;
and a programmatic interface .

US20050120160A1
CLAIM 7
. In a computer system comprising a plurality of resources and user interface having a display , a method of permitting configuration of the plurality of resources , comprising : displaying a representation of a virtual server (consumption rate, memory consumption rate) on the display ;
and providing for a user to select a resource and associate the resource with the virtual server .

US20050120160A1
CLAIM 25
. The virtual computing system according to claim 21 , wherein the manager is adapted to perform at least one of the group of actions comprising : add a resource to the virtual computing system ;
repair a resource of the virtual computing system ;
remove a resource from the virtual computing system ;
start a processing resource (alternate cloud) of the virtual computing system ;
and stop a processing resource of the virtual computing system .
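
To make the charted method easier to follow, here is a compact Python sketch of the overall flow of '134 claim 1 as written above: monitor per-VM consumption, prioritize under a first scheme, detect a change exceeding a threshold, re-prioritize under a second scheme bounded by a maximum capacity, and migrate to resources outside the environment. The helper callables (monitor, first_scheme, second_scheme, migrate) and the scalar consumption model are assumptions for illustration, not anything disclosed by either patent.

```python
def manage_cloud_resources(vms, monitor, first_scheme, second_scheme,
                           change_threshold, max_capacity, migrate):
    """Illustrative flow: monitor per-VM consumption (CPU, memory, I/O, or
    change-region size of the VM's display, abstracted here as one scalar),
    prioritize with a first scheme, and fall back to a second scheme plus
    migration once the change in consumption exceeds a threshold."""
    usage = {vm: monitor(vm) for vm in vms}              # determined consumption rate per VM
    priorities = first_scheme(usage)                     # first resource management scheme

    # a later monitoring pass; the delta stands in for the claimed "change"
    changes = {vm: monitor(vm) - usage[vm] for vm in vms}
    if any(abs(delta) > change_threshold for delta in changes.values()):
        # second scheme is bounded by the environment's maximum allowed capacity
        priorities = second_scheme(usage, max_capacity)
        total = sum(usage.values())
        if total > max_capacity:
            # migrate the lowest-priority VM(s) to resources outside the environment
            for vm in reversed(priorities):
                migrate(vm, target="alternate-cloud")
                total -= usage[vm]
                if total <= max_capacity:
                    break
    return priorities
```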

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud (processing resource) resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US20050120160A1
CLAIM 25
. The virtual computing system according to claim 21 , wherein the manager is adapted to perform at least one of the group of actions comprising : add a resource to the virtual computing system ;
repair a resource of the virtual computing system ;
remove a resource from the virtual computing system ;
start a processing resource (alternate cloud) of the virtual computing system ;
and stop a processing resource of the virtual computing system .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate (virtual server) of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (command line) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud (processing resource) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20050120160A1
CLAIM 6
. The system as recited in claim 1 , wherein the interface includes at least one of a group comprising : a graphical user interface ;
a command line (maximum capacity) interface ;
and a programmatic interface .

US20050120160A1
CLAIM 7
. In a computer system comprising a plurality of resources and user interface having a display , a method of permitting configuration of the plurality of resources , comprising : displaying a representation of a virtual server (consumption rate, memory consumption rate) on the display ;
and providing for a user to select a resource and associate the resource with the virtual server .

US20050120160A1
CLAIM 25
. The virtual computing system according to claim 21 , wherein the manager is adapted to perform at least one of the group of actions comprising : add a resource to the virtual computing system ;
repair a resource of the virtual computing system ;
remove a resource from the virtual computing system ;
start a processing resource (alternate cloud) of the virtual computing system ;
and stop a processing resource of the virtual computing system .
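
For completeness, a minimal sketch of the management actions recited in the US'160 claims charted above: associating a resource with a virtual server, and adding, repairing, removing, starting or stopping a processing resource. The class and its state model are invented for illustration and are not the reference's implementation.

```python
class VirtualServerManager:
    """Sketch of the US'160 management actions over a simple resource table."""

    def __init__(self):
        self.resources = {}   # resource_id -> {"state": ..., "server": ...}

    def associate(self, resource_id, virtual_server):
        # claim 7: associate a selected resource with a virtual server
        self.resources.setdefault(resource_id, {"state": "stopped"})["server"] = virtual_server

    def add(self, resource_id):
        self.resources[resource_id] = {"state": "stopped", "server": None}

    def remove(self, resource_id):
        self.resources.pop(resource_id, None)

    def repair(self, resource_id):
        self.resources[resource_id]["state"] = "repaired"

    def start(self, resource_id):
        self.resources[resource_id]["state"] = "running"

    def stop(self, resource_id):
        self.resources[resource_id]["state"] = "stopped"
```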

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud (processing resource) resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US20050120160A1
CLAIM 25
. The virtual computing system according to claim 21 , wherein the manager is adapted to perform at least one of the group of actions comprising : add a resource to the virtual computing system ;
repair a resource of the virtual computing system ;
remove a resource from the virtual computing system ;
start a processing resource (alternate cloud) of the virtual computing system ;
and stop a processing resource of the virtual computing system .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate (virtual server) of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (command line) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud (processing resource) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20050120160A1
CLAIM 6
. The system as recited in claim 1 , wherein the interface includes at least one of a group comprising : a graphical user interface ;
a command line (maximum capacity) interface ;
and a programmatic interface .

US20050120160A1
CLAIM 7
. In a computer system comprising a plurality of resources and user interface having a display , a method of permitting configuration of the plurality of resources , comprising : displaying a representation of a virtual server (consumption rate, memory consumption rate) on the display ;
and providing for a user to select a resource and associate the resource with the virtual server .

US20050120160A1
CLAIM 25
. The virtual computing system according to claim 21 , wherein the manager is adapted to perform at least one of the group of actions comprising : add a resource to the virtual computing system ;
repair a resource of the virtual computing system ;
remove a resource from the virtual computing system ;
start a processing resource (alternate cloud) of the virtual computing system ;
and stop a processing resource of the virtual computing system .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud (processing resource) resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US20050120160A1
CLAIM 25
. The virtual computing system according to claim 21 , wherein the manager is adapted to perform at least one of the group of actions comprising : add a resource to the virtual computing system ;
repair a resource of the virtual computing system ;
remove a resource from the virtual computing system ;
start a processing resource (alternate cloud) of the virtual computing system ;
and stop a processing resource of the virtual computing system .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20060069761A1

Filed: 2004-09-14     Issued: 2006-03-30

System and method for load balancing virtual machines in a computer network

(Original Assignee) Dell Products LP     (Current Assignee) Dell Products LP

Sumankumar Singh, Timothy Abels
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment (virtual machines) , comprising : determining a consumption rate of cloud resources (virtual machines) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (memory resource) , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20060069761A1
CLAIM 3
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a server whose total resource utilization exceeds a predetermined threshold comprises the step of identifying a server whose use of memory resources (processor usage) exceeds a predetermined threshold .

US20060069761A1
CLAIM 5
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a virtual machine within the source server comprises the step of identifying the virtual machine within the source server that has the lowest resource utilization requirements among the virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the source server .
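
Read together, the US'761 claims charted in this section describe a selection pipeline: find a source server whose (memory) utilization exceeds a threshold (claim 3), pick its lowest-utilization virtual machine (claim 5), and, per claim 7 charted further below, pick the most-available other server as the migration target. The Python sketch below is a hypothetical rendering of that pipeline; the dictionary layout and names are assumptions.

```python
def select_migration(servers, threshold):
    """Sketch of the US'761 selection logic.
    `servers` maps server name -> {"utilization": float, "vms": {vm_id: utilization}}."""
    # claim 3: a source server whose utilization exceeds the predetermined threshold
    source = next((s for s, d in servers.items() if d["utilization"] > threshold), None)
    if source is None:
        return None                                  # nothing exceeds the threshold
    # claim 5: the VM on the source server with the lowest resource utilization
    vm = min(servers[source]["vms"], key=servers[source]["vms"].get)
    # claim 7: the other server with the highest level of resource availability
    others = [s for s in servers if s != source]
    target = min(others, key=lambda s: servers[s]["utilization"]) if others else None
    return vm, source, target
```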

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the first resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20060069761A1
CLAIM 5
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a virtual machine within the source server comprises the step of identifying the virtual machine within the source server that has the lowest resource utilization requirements among the virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the source server .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the LIRS replacement scheme comprises using LIRS based processor usage (memory resource) tracking .
US20060069761A1
CLAIM 3
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a server whose total resource utilization exceeds a predetermined threshold comprises the step of identifying a server whose use of memory resources (processor usage) exceeds a predetermined threshold .

US20060069761A1
CLAIM 5
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a virtual machine within the source server comprises the step of identifying the virtual machine within the source server that has the lowest resource utilization requirements among the virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the source server .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the LIRS replacement scheme comprises using LIRS based memory usage tracking .
US20060069761A1
CLAIM 5
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a virtual machine within the source server comprises the step of identifying the virtual machine within the source server that has the lowest resource utilization requirements among the virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the source server .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (resource availability) , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
US20060069761A1
CLAIM 5
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a virtual machine within the source server comprises the step of identifying the virtual machine within the source server that has the lowest resource utilization requirements among the virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the source server .

US20060069761A1
CLAIM 7
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a target server for the migration of the identified virtual machine comprises the step of identifying , among the servers of the computer network not including the source server , the server having the highest level of resource availability (hybrid cloud) .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
US20060069761A1
CLAIM 5
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a virtual machine within the source server comprises the step of identifying the virtual machine within the source server that has the lowest resource utilization requirements among the virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the source server .
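
For contrast with the LIRS scheme, a minimal sketch of least recently used (LRU) prioritization as recited in '134 claim 6 above: the VM whose cloud resources were used least recently becomes the first candidate for demotion or migration. The class below is illustrative only.

```python
from collections import OrderedDict

class LRUVMPriority:
    """Sketch of LRU-based prioritization for the second management scheme:
    the least recently active VM is the first candidate to lose or migrate
    its cloud resources."""

    def __init__(self):
        self._order = OrderedDict()      # oldest entry = least recently used

    def touch(self, vm_id):
        """Record that a VM just consumed cloud resources."""
        self._order.pop(vm_id, None)
        self._order[vm_id] = True        # move to the most-recently-used position

    def eviction_order(self):
        """VMs ordered least-recently-used first."""
        return list(self._order)
```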

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources (virtual machines) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (memory resource) , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment (virtual machines) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20060069761A1
CLAIM 3
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a server whose total resource utilization exceeds a predetermined threshold comprises the step of identifying a server whose use of memory resources (processor usage) exceeds a predetermined threshold .

US20060069761A1
CLAIM 5
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a virtual machine within the source server comprises the step of identifying the virtual machine within the source server that has the lowest resource utilization requirements among the virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the source server .

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using a low inter-reference recency set (LIRS) replacement scheme .
US20060069761A1
CLAIM 5
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a virtual machine within the source server comprises the step of identifying the virtual machine within the source server that has the lowest resource utilization requirements among the virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the source server .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (memory resource) tracking .
US20060069761A1
CLAIM 3
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a server whose total resource utilization exceeds a predetermined threshold comprises the step of identifying a server whose use of memory resources (processor usage) exceeds a predetermined threshold .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (resource availability) , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
US20060069761A1
CLAIM 5
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a virtual machine within the source server comprises the step of identifying the virtual machine within the source server that has the lowest resource utilization requirements among the virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the source server .

US20060069761A1
CLAIM 7
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a target server for the migration of the identified virtual machine comprises the step of identifying , among the servers of the computer network not including the source server , the server having the highest level of resource availability (hybrid cloud) .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using least recently used (LRU) replacement scheme .
US20060069761A1
CLAIM 5
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a virtual machine within the source server comprises the step of identifying the virtual machine within the source server that has the lowest resource utilization requirements among the virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the source server .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment (virtual machines) , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (virtual machines) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (memory resource) , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20060069761A1
CLAIM 3
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a server whose total resource utilization exceeds a predetermined threshold comprises the step of identifying a server whose use of memory resources (processor usage) exceeds a predetermined threshold .

US20060069761A1
CLAIM 5
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a virtual machine within the source server comprises the step of identifying the virtual machine within the source server that has the lowest resource utilization requirements among the virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the source server .

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using a low inter-reference recency set (LIRS) replacement scheme .
US20060069761A1
CLAIM 5
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a virtual machine within the source server comprises the step of identifying the virtual machine within the source server that has the lowest resource utilization requirements among the virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the source server .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (memory resource) tracking .
US20060069761A1
CLAIM 3
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a server whose total resource utilization exceeds a predetermined threshold comprises the step of identifying a server whose use of memory resources (processor usage) exceeds a predetermined threshold .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (resource availability) , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
US20060069761A1
CLAIM 5
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a virtual machine within the source server comprises the step of identifying the virtual machine within the source server that has the lowest resource utilization requirements among the virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the source server .

US20060069761A1
CLAIM 7
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a target server for the migration of the identified virtual machine comprises the step of identifying , among the servers of the computer network not including the source server , the server having the highest level of resource availability (hybrid cloud) .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using least recently used (LRU) replacement scheme .
US20060069761A1
CLAIM 5
. The method for migrating a virtual machine of claim 1 , wherein the step of identifying a virtual machine within the source server comprises the step of identifying the virtual machine within the source server that has the lowest resource utilization requirements among the virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) of the source server .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US7337233B2

Filed: 2003-12-31     Issued: 2008-02-26

Network system with TCP/IP protocol spoofing

(Original Assignee) Hughes Network Systems LLC     (Current Assignee) JPMorgan Chase Bank NA ; Hughes Network Systems LLC

Douglas M. Dillon
US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage tracking (keep track) .
US7337233B2
CLAIM 72
. A system comprising : an ACK spoofing subsystem for performing TCP ACK spoofing on a TCP connection between a first apparatus on a network and a second apparatus on the network , wherein said subsystem is configured to : (1) receive a TCP packet indicating that a new TCP connection is being formed between the first apparatus and the second apparatus ;
(2) initialize , in response to receiving the TCP packet , a data structure in a memory , the data structure being arranged to store data sent on the TCP connection by the first apparatus toward the second apparatus ;
(3) receive data sent on the TCP connection by the first apparatus toward the second apparatus ;
(4) generate a TCP ACK in response to receipt of the data , the TCP ACK being arranged to spoof receipt by the second apparatus of the data ;
(5) store the data in the data structure ;
(6) forward the data toward the second apparatus ;
(7) in response to an acknowledgement for the data not being received within a predetermined amount of time , forward the data stored in the data structure toward the second apparatus to thereby forward the data again ;
(8) delete the data from the data structure in response to receipt of an acknowledgement for the data ;
(9) keep track (processor usage tracking, alternate cloud resources include one) of a highest in-sequence sequence number on the TCP connection ;
and (10) in the case that the second apparatus sends toward the first apparatus a TCP ACK for the data , the TCP ACK containing data , receive the TCP ACK and forward it toward the first apparatus after ensuring that its ACK number is set equal to the number .
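
The US'233 claim 72 steps charted above amount to a per-connection ACK-spoofing state machine. The sketch below, with invented names and a simplified sequence-number model, walks items (2) through (9): buffer forwarded data, return a spoofed ACK, keep track of the highest in-sequence sequence number, retransmit on timeout, and purge the buffer when a real acknowledgement arrives. It is not the Hughes implementation.

```python
import time

class AckSpoofer:
    """Sketch of an ACK-spoofing subsystem for one TCP connection."""

    def __init__(self, retransmit_timeout=1.0):
        self.retransmit_timeout = retransmit_timeout
        self.buffer = {}                 # seq -> (payload, time last sent)   item (2)/(5)
        self.highest_in_sequence = 0     # item (9) of the charted claim

    def on_data(self, seq, payload, forward):
        self.buffer[seq] = (payload, time.monotonic())
        if seq == self.highest_in_sequence:          # only in-sequence data advances the counter
            self.highest_in_sequence = seq + len(payload)
        forward(seq, payload)                        # item (6): forward toward the far end
        return ("ACK", self.highest_in_sequence)     # item (4): spoofed acknowledgement

    def on_ack(self, ack_number):
        # item (8): delete acknowledged data from the data structure
        for seq in [s for s in self.buffer if s + len(self.buffer[s][0]) <= ack_number]:
            del self.buffer[seq]

    def retransmit_due(self, forward):
        # item (7): re-forward data not acknowledged within the timeout
        now = time.monotonic()
        for seq, (payload, sent) in list(self.buffer.items()):
            if now - sent > self.retransmit_timeout:
                forward(seq, payload)
                self.buffer[seq] = (payload, now)
```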

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one (keep track) or more of resources included in public cloud (to receive data) , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US7337233B2
CLAIM 30
. A system comprising : a receiving unit that is configured to receive data (public cloud) sent from a source apparatus , the data being addressed at the IP level to a destination apparatus ;
and a TCP ACK generator that is configured to generate a TCP ACK to be sent to the source apparatus in an IP packet addressed to the source apparatus , the TCP ACK being arranged to spoof receipt of the data by the destination apparatus , wherein the destination apparatus receives the data via a communication path comprising a satellite link .

US7337233B2
CLAIM 72
. A system comprising : an ACK spoofing subsystem for performing TCP ACK spoofing on a TCP connection between a first apparatus on a network and a second apparatus on the network , wherein said subsystem is configured to : (1) receive a TCP packet indicating that a new TCP connection is being formed between the first apparatus and the second apparatus ;
(2) initialize , in response to receiving the TCP packet , a data structure in a memory , the data structure being arranged to store data sent on the TCP connection by the first apparatus toward the second apparatus ;
(3) receive data sent on the TCP connection by the first apparatus toward the second apparatus ;
(4) generate a TCP ACK in response to receipt of the data , the TCP ACK being arranged to spoof receipt by the second apparatus of the data ;
(5) store the data in the data structure ;
(6) forward the data toward the second apparatus ;
(7) in response to an acknowledgement for the data not being received within a predetermined amount of time , forward the data stored in the data structure toward the second apparatus to thereby forward the data again ;
(8) delete the data from the data structure in response to receipt of an acknowledgement for the data ;
(9) keep track (processor usage tracking, alternate cloud resources include one) of a highest in-sequence sequence number on the TCP connection ;
and (10) in the case that the second apparatus sends toward the first apparatus a TCP ACK for the data , the TCP ACK containing data , receive the TCP ACK and forward it toward the first apparatus after ensuring that its ACK number is set equal to the number .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage tracking (keep track) .
US7337233B2
CLAIM 72
. A system comprising : an ACK spoofing subsystem for performing TCP ACK spoofing on a TCP connection between a first apparatus on a network and a second apparatus on the network , wherein said subsystem is configured to : (1) receive a TCP packet indicating that a new TCP connection is being formed between the first apparatus and the second apparatus ;
(2) initialize , in response to receiving the TCP packet , a data structure in a memory , the data structure being arranged to store data sent on the TCP connection by the first apparatus toward the second apparatus ;
(3) receive data sent on the TCP connection by the first apparatus toward the second apparatus ;
(4) generate a TCP ACK in response to receipt of the data , the TCP ACK being arranged to spoof receipt by the second apparatus of the data ;
(5) store the data in the data structure ;
(6) forward the data toward the second apparatus ;
(7) in response to an acknowledgement for the data not being received within a predetermined amount of time , forward the data stored in the data structure toward the second apparatus to thereby forward the data again ;
(8) delete the data from the data structure in response to receipt of an acknowledgement for the data ;
(9) keep track (processor usage tracking, alternate cloud resources include one) of a highest in-sequence sequence number on the TCP connection ;
and (10) in the case that the second apparatus sends toward the first apparatus a TCP ACK for the data , the TCP ACK containing data , receive the TCP ACK and forward it toward the first apparatus after ensuring that its ACK number is set equal to the number .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one (keep track) or more of resources included in public cloud (to receive data) , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US7337233B2
CLAIM 30
. A system comprising : a receiving unit that is configured to receive data (public cloud) sent from a source apparatus , the data being addressed at the IP level to a destination apparatus ;
and a TCP ACK generator that is configured to generate a TCP ACK to be sent to the source apparatus in an IP packet addressed to the source apparatus , the TCP ACK being arranged to spoof receipt of the data by the destination apparatus , wherein the destination apparatus receives the data via a communication path comprising a satellite link .

US7337233B2
CLAIM 72
. A system comprising : an ACK spoofing subsystem for performing TCP ACK spoofing on a TCP connection between a first apparatus on a network and a second apparatus on the network , wherein said subsystem is configured to : (1) receive a TCP packet indicating that a new TCP connection is being formed between the first apparatus and the second apparatus ;
(2) initialize , in response to receiving the TCP packet , a data structure in a memory , the data structure being arranged to store data sent on the TCP connection by the first apparatus toward the second apparatus ;
(3) receive data sent on the TCP connection by the first apparatus toward the second apparatus ;
(4) generate a TCP ACK in response to receipt of the data , the TCP ACK being arranged to spoof receipt by the second apparatus of the data ;
(5) store the data in the data structure ;
(6) forward the data toward the second apparatus ;
(7) in response to an acknowledgement for the data not being received within a predetermined amount of time , forward the data stored in the data structure toward the second apparatus to thereby forward the data again ;
(8) delete the data from the data structure in response to receipt of an acknowledgement for the data ;
(9) keep track (processor usage tracking, alternate cloud resources include one) of a highest in-sequence sequence number on the TCP connection ;
and (10) in the case that the second apparatus sends toward the first apparatus a TCP ACK for the data , the TCP ACK containing data , receive the TCP ACK and forward it toward the first apparatus after ensuring that its ACK number is set equal to the number .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage tracking (keep track) .
US7337233B2
CLAIM 72
. A system comprising : an ACK spoofing subsystem for performing TCP ACK spoofing on a TCP connection between a first apparatus on a network and a second apparatus on the network , wherein said subsystem is configured to : (1) receive a TCP packet indicating that a new TCP connection is being formed between the first apparatus and the second apparatus ;
(2) initialize , in response to receiving the TCP packet , a data structure in a memory , the data structure being arranged to store data sent on the TCP connection by the first apparatus toward the second apparatus ;
(3) receive data sent on the TCP connection by the first apparatus toward the second apparatus ;
(4) generate a TCP ACK in response to receipt of the data , the TCP ACK being arranged to spoof receipt by the second apparatus of the data ;
(5) store the data in the data structure ;
(6) forward the data toward the second apparatus ;
(7) in response to an acknowledgement for the data not being received within a predetermined amount of time , forward the data stored in the data structure toward the second apparatus to thereby forward the data again ;
(8) delete the data from the data structure in response to receipt of an acknowledgement for the data ;
(9) keep track (processor usage tracking, alternate cloud resources include one) of a highest in-sequence sequence number on the TCP connection ;
and (10) in the case that the second apparatus sends toward the first apparatus a TCP ACK for the data , the TCP ACK containing data , receive the TCP ACK and forward it toward the first apparatus after ensuring that its ACK number is set equal to the number .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one (keep track) or more of resources included in public cloud (to receive data) , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US7337233B2
CLAIM 30
. A system comprising : a receiving unit that is configured to receive data (public cloud) sent from a source apparatus , the data being addressed at the IP level to a destination apparatus ;
and a TCP ACK generator that is configured to generate a TCP ACK to be sent to the source apparatus in an IP packet addressed to the source apparatus , the TCP ACK being arranged to spoof receipt of the data by the destination apparatus , wherein the destination apparatus receives the data via a communication path comprising a satellite link .

US7337233B2
CLAIM 72
. A system comprising : an ACK spoofing subsystem for performing TCP ACK spoofing on a TCP connection between a first apparatus on a network and a second apparatus on the network , wherein said subsystem is configured to : (1) receive a TCP packet indicating that a new TCP connection is being formed between the first apparatus and the second apparatus ;
(2) initialize , in response to receiving the TCP packet , a data structure in a memory , the data structure being arranged to store data sent on the TCP connection by the first apparatus toward the second apparatus ;
(3) receive data sent on the TCP connection by the first apparatus toward the second apparatus ;
(4) generate a TCP ACK in response to receipt of the data , the TCP ACK being arranged to spoof receipt by the second apparatus of the data ;
(5) store the data in the data structure ;
(6) forward the data toward the second apparatus ;
(7) in response to an acknowledgement for the data not being received within a predetermined amount of time , forward the data stored in the data structure toward the second apparatus to thereby forward the data again ;
(8) delete the data from the data structure in response to receipt of an acknowledgement for the data ;
(9) keep track (processor usage tracking, alternate cloud resources include one) of a highest in-sequence sequence number on the TCP connection ;
and (10) in the case that the second apparatus sends toward the first apparatus a TCP ACK for the data , the TCP ACK containing data , receive the TCP ACK and forward it toward the first apparatus after ensuring that its ACK number is set equal to the number .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20050060590A1

Filed: 2003-09-16     Issued: 2005-03-17

Power-aware workload balancing using virtual machines

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

David Bradley, Richard Harper, Steven Hunter
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment (virtual machines) , comprising : determining a consumption rate of cloud resources (virtual machines) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource (physical resources) management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (supporting one) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20050060590A1
CLAIM 1
. A method of managing workload on a system comprising a plurality of resources each capable of supporting one (maximum capacity) or more virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) and at least one shared storage location , comprising the steps of : calculating the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on ;
ascertaining the number of the available resources within said system ;
determining the relationship between the number of needed resources and the number of available resources ;
and performing steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .

US20050060590A1
CLAIM 12
. A processing workload management system comprising : multiple physical resources (first resource) capable of supporting one or more virtual machines ;
and at least one power management component adapted to calculate the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on , ascertain the number of the available resources within said system , determine the relationship between the number of needed resources and the number of available resources ;
and perform steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .
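
The core computation in the US'590 claims charted above is a capacity comparison: derive the number of physical resources the current workload needs from the total utilization of the resources that are powered on, compare it with what is available, and consolidate (power off) or expand (power on) and migrate virtual machines accordingly. A hypothetical sketch of that calculation, with an assumed per-resource capacity model:

```python
import math

def plan_power_aware_balance(powered_on, per_resource_capacity=1.0):
    """Sketch of the US'590 calculation.
    `powered_on` maps resource name -> current utilization as a fraction of
    one resource's capacity (0.0 - 1.0)."""
    total_utilization = sum(powered_on.values())
    needed = math.ceil(total_utilization / per_resource_capacity)   # needed resources
    available = len(powered_on)                                     # available (powered-on) resources
    if needed < available:
        # workload fits on fewer resources: migrate VMs off and power spares down
        return {"action": "consolidate", "power_off": available - needed}
    if needed > available:
        # workload exceeds what is powered on: power on more and migrate VMs onto them
        return {"action": "expand", "power_on": needed - available}
    return {"action": "hold"}
```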

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the first resource (physical resources) management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US20050060590A1
CLAIM 1
. A method of managing workload on a system comprising a plurality of resources each capable of supporting one or more virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) and at least one shared storage location , comprising the steps of : calculating the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on ;
ascertaining the number of the available resources within said system ;
determining the relationship between the number of needed resources and the number of available resources ;
and performing steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .

US20050060590A1
CLAIM 12
. A processing workload management system comprising : multiple physical resources (first resource) capable of supporting one or more virtual machines ;
and at least one power management component adapted to calculate the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on , ascertain the number of the available resources within said system , determine the relationship between the number of needed resources and the number of available resources ;
and perform steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the LIRS replacement scheme comprises using LIRS based processor usage tracking .
US20050060590A1
CLAIM 1
. A method of managing workload on a system comprising a plurality of resources each capable of supporting one or more virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) and at least one shared storage location , comprising the steps of : calculating the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on ;
ascertaining the number of the available resources within said system ;
determining the relationship between the number of needed resources and the number of available resources ;
and performing steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the LIRS replacement scheme comprises using LIRS based memory usage tracking .
US20050060590A1
CLAIM 1
. A method of managing workload on a system comprising a plurality of resources each capable of supporting one or more virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) and at least one shared storage location , comprising the steps of : calculating the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on ;
ascertaining the number of the available resources within said system ;
determining the relationship between the number of needed resources and the number of available resources ;
and performing steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (storage location) , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
US20050060590A1
CLAIM 1
. A method of managing workload on a system comprising a plurality of resources each capable of supporting one or more virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) and at least one shared storage location (hybrid cloud) , comprising the steps of : calculating the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on ;
ascertaining the number of the available resources within said system ;
determining the relationship between the number of needed resources and the number of available resources ;
and performing steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources (virtual machines) using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme .
US20050060590A1
CLAIM 1
. A method of managing workload on a system comprising a plurality of resources each capable of supporting one or more virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) and at least one shared storage location , comprising the steps of : calculating the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on ;
ascertaining the number of the available resources within said system ;
determining the relationship between the number of needed resources and the number of available resources ;
and performing steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .

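Claim 6 charted above recites a least recently used (LRU) replacement scheme as the second resource management scheme. A minimal sketch, assuming a hypothetical LruPrioritizer that ranks the least recently active VM first for migration:

```python
# Hypothetical sketch of LRU-based prioritization (US9635134B2 claim 6): the
# least recently active VM is the first candidate for migration.
from collections import OrderedDict
from typing import List

class LruPrioritizer:
    def __init__(self) -> None:
        self._order: "OrderedDict[str, None]" = OrderedDict()

    def touch(self, vm: str) -> None:
        """Record that a VM just consumed cloud resources."""
        self._order.pop(vm, None)
        self._order[vm] = None               # most recently used goes to the end

    def migration_candidates(self) -> List[str]:
        """Least recently used VMs first."""
        return list(self._order)

lru = LruPrioritizer()
for vm in ["vm-1", "vm-2", "vm-3", "vm-1"]:
    lru.touch(vm)
print(lru.migration_candidates())           # ['vm-2', 'vm-3', 'vm-1']
```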
US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources (virtual machines) by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment (virtual machines) ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource (physical resources) management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (supporting one) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20050060590A1
CLAIM 1
. A method of managing workload on a system comprising a plurality of resources each capable of supporting one (maximum capacity) or more virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) and at least one shared storage location , comprising the steps of : calculating the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on ;
ascertaining the number of the available resources within said system ;
determining the relationship between the number of needed resources and the number of available resources ;
and performing steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .

US20050060590A1
CLAIM 12
. A processing workload management system comprising : multiple physical resources (first resource) capable of supporting one or more virtual machines ;
and at least one power management component adapted to calculate the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on , ascertain the number of the available resources within said system , determine the relationship between the number of needed resources and the number of available resources ;
and perform steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .

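Independent claims 1, 7 and 13 share the monitor, prioritize, threshold-test, re-prioritize and migrate flow charted above. The sketch below strings those steps together in Python; the manage_cycle() signature, the threshold semantics and the migrate_to_alternate_cloud() callback are assumptions for illustration only.

```python
# Minimal sketch of the flow recited in US9635134B2 claims 1, 7 and 13.
from typing import Callable, Dict, List

def manage_cycle(previous: Dict[str, float],
                 current: Dict[str, float],
                 max_capacity: float,
                 threshold: float,
                 migrate_to_alternate_cloud: Callable[[str], None]) -> Dict[str, List[str]]:
    """previous/current map VM name -> consumption rate (e.g. CPU fraction of the cloud)."""
    # First scheme: rank VMs by their current consumption rate (heaviest first).
    priority = sorted(current, key=current.get, reverse=True)

    # Change in the consumption rate per VM, tested against the predetermined threshold.
    change = {vm: abs(current[vm] - previous.get(vm, 0.0)) for vm in current}
    exceeded = any(delta > threshold for delta in change.values())

    migrated: List[str] = []
    if exceeded and sum(current.values()) > max_capacity:
        # Second scheme: re-prioritize by the size of the change, largest first,
        # and migrate until consumption fits within the allowed maximum capacity.
        priority = sorted(current, key=lambda vm: change[vm], reverse=True)
        overshoot = sum(current.values()) - max_capacity
        for vm in priority:
            if overshoot <= 0:
                break
            migrate_to_alternate_cloud(vm)   # consumption leaves the local environment
            migrated.append(vm)
            overshoot -= current[vm]
    return {"priority": priority, "migrated": migrated}

result = manage_cycle(previous={"vm-a": 0.2, "vm-b": 0.3},
                      current={"vm-a": 0.9, "vm-b": 0.3},
                      max_capacity=1.0,
                      threshold=0.5,
                      migrate_to_alternate_cloud=lambda vm: None)
print(result)   # {'priority': ['vm-a', 'vm-b'], 'migrated': ['vm-a']}
```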
US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using a low inter-reference recency set (LIRS) replacement scheme .
US20050060590A1
CLAIM 1
. A method of managing workload on a system comprising a plurality of resources each capable of supporting one or more virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) and at least one shared storage location , comprising the steps of : calculating the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on ;
ascertaining the number of the available resources within said system ;
determining the relationship between the number of needed resources and the number of available resources ;
and performing steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (storage location) , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
US20050060590A1
CLAIM 1
. A method of managing workload on a system comprising a plurality of resources each capable of supporting one or more virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) and at least one shared storage location (hybrid cloud) , comprising the steps of : calculating the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on ;
ascertaining the number of the available resources within said system ;
determining the relationship between the number of needed resources and the number of available resources ;
and performing steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using least recently used (LRU) replacement scheme .
US20050060590A1
CLAIM 1
. A method of managing workload on a system comprising a plurality of resources each capable of supporting one or more virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) and at least one shared storage location , comprising the steps of : calculating the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on ;
ascertaining the number of the available resources within said system ;
determining the relationship between the number of needed resources and the number of available resources ;
and performing steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment (virtual machines) , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources (virtual machines) , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource (physical resources) management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (supporting one) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20050060590A1
CLAIM 1
. A method of managing workload on a system comprising a plurality of resources each capable of supporting one (maximum capacity) or more virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) and at least one shared storage location , comprising the steps of : calculating the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on ;
ascertaining the number of the available resources within said system ;
determining the relationship between the number of needed resources and the number of available resources ;
and performing steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .

US20050060590A1
CLAIM 12
. A processing workload management system comprising : multiple physical resources (first resource) capable of supporting one or more virtual machines ;
and at least one power management component adapted to calculate the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on , ascertain the number of the available resources within said system , determine the relationship between the number of needed resources and the number of available resources ;
and perform steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .

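Claims 7 and 13 also recite a change region size based on changed regions of a graphical display generated by the VMs. The sketch below shows one plausible measure, the bounding-rectangle area of the changed pixels between two frames; the measure itself is an assumption, since neither claim fixes how the size is computed.

```python
# Hypothetical measure of "change region size" (US9635134B2 claims 7 and 13):
# the bounding-rectangle area of all pixels that differ between two frames.
from typing import Sequence

def change_region_size(prev: Sequence[Sequence[int]],
                       curr: Sequence[Sequence[int]]) -> int:
    changed = [(y, x)
               for y, (row_p, row_c) in enumerate(zip(prev, curr))
               for x, (p, c) in enumerate(zip(row_p, row_c))
               if p != c]
    if not changed:
        return 0
    ys = [y for y, _ in changed]
    xs = [x for _, x in changed]
    return (max(ys) - min(ys) + 1) * (max(xs) - min(xs) + 1)

frame_a = [[0, 0, 0, 0],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]
frame_b = [[0, 1, 0, 0],
           [0, 1, 1, 0],
           [0, 0, 0, 0]]
print(change_region_size(frame_a, frame_b))   # 4 (a 2x2 changed region)
```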
US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using a low inter-reference recency set (LIRS) replacement scheme .
US20050060590A1
CLAIM 1
. A method of managing workload on a system comprising a plurality of resources each capable of supporting one or more virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) and at least one shared storage location , comprising the steps of : calculating the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on ;
ascertaining the number of the available resources within said system ;
determining the relationship between the number of needed resources and the number of available resources ;
and performing steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (virtual machines) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (storage location) , Internet resources (virtual machines) , or resources included in virtual private networks (VPNs) .
US20050060590A1
CLAIM 1
. A method of managing workload on a system comprising a plurality of resources each capable of supporting one or more virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) and at least one shared storage location (hybrid cloud) , comprising the steps of : calculating the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on ;
ascertaining the number of the available resources within said system ;
determining the relationship between the number of needed resources and the number of available resources ;
and performing steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources (virtual machines) using least recently used (LRU) replacement scheme .
US20050060590A1
CLAIM 1
. A method of managing workload on a system comprising a plurality of resources each capable of supporting one or more virtual machines (cloud resources, cloud computing environment, alternate cloud resources, Internet resources) and at least one shared storage location , comprising the steps of : calculating the number of needed resources required to support the current workload based on the total utilization of the resources currently powered on ;
ascertaining the number of the available resources within said system ;
determining the relationship between the number of needed resources and the number of available resources ;
and performing steps to migrate at least one virtual machine from at least one physical resource to at least one other physical resource based on the relationship .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US7389330B2

Filed: 2003-09-10     Issued: 2008-06-17

System and method for pre-fetching content in a proxy architecture

(Original Assignee) Hughes Network Systems LLC     (Current Assignee) DIRECTV GROUP Inc ; JPMorgan Chase Bank NA ; Hughes Network Systems LLC

Douglas Dillon, John Border, Frank Kelly, Daniel Friedman
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (more processors) , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates (configurable threshold) , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US7389330B2
CLAIM 1
. A method for providing a proxy service to retrieve content over a data network from a content server , the method comprising : forwarding a request for the content over the data network towards the content server , wherein a proxy in communication with the content server determines a plurality of objects corresponding to the content based on the request , the proxy generating a list specifying the objects that are to be pre-fetched according to a criterion , wherein number of objects specified in the list is limited by a configurable threshold (I/O access rates) ;
receiving the generated list in response to the request ;
receiving the pre-fetched objects on the list ;
and selectively holding a subsequent request associated with an object specified on the list .

US7389330B2
CLAIM 12
. A computer readable storage medium having instructions for providing a proxy service to retrieve content over a data network from a content server , said instruction , being arranged , upon execution , to cause one or more processors (processor usage, processor usage tracking) to perform the method of claim 1 .

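The US7389330B2 mapping above rests on a pre-fetch list whose length is limited by a configurable threshold (the element charted against "I/O access rates"). A minimal sketch of that behavior, with hypothetical build_prefetch_list() and should_hold_request() helpers:

```python
# Hypothetical sketch of the US7389330B2 pre-fetch behavior: cap the pre-fetch
# list at a configurable threshold and hold later requests for listed objects.
from typing import Iterable, List, Set

def build_prefetch_list(objects: Iterable[str], threshold: int) -> List[str]:
    """Select at most `threshold` objects to pre-fetch, in the order found."""
    selected: List[str] = []
    for url in objects:
        if len(selected) >= threshold:
            break
        selected.append(url)
    return selected

def should_hold_request(url: str, prefetch_list: Set[str]) -> bool:
    """Hold a subsequent request when the object is already expected from the pre-fetch."""
    return url in prefetch_list

page_objects = ["/img/logo.png", "/css/site.css", "/js/app.js", "/img/banner.png"]
plist = build_prefetch_list(page_objects, threshold=3)
print(plist)                                            # first three objects only
print(should_hold_request("/js/app.js", set(plist)))    # True
```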
US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (more processors) tracking .
US7389330B2
CLAIM 12
. A computer readable storage medium having instructions for providing a proxy service to retrieve content over a data network from a content server , said instruction , being arranged , upon execution , to cause one or more processors (processor usage, processor usage tracking) to perform the method of claim 1 .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (more processors) , memory usage , or I/O access rates (configurable threshold) for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US7389330B2
CLAIM 1
. A method for providing a proxy service to retrieve content over a data network from a content server , the method comprising : forwarding a request for the content over the data network towards the content server , wherein a proxy in communication with the content server determines a plurality of objects corresponding to the content based on the request , the proxy generating a list specifying the objects that are to be pre-fetched according to a criterion , wherein number of objects specified in the list is limited by a configurable threshold (I/O access rates) ;
receiving the generated list in response to the request ;
receiving the pre-fetched objects on the list ;
and selectively holding a subsequent request associated with an object specified on the list .

US7389330B2
CLAIM 12
. A computer readable storage medium having instructions for providing a proxy service to retrieve content over a data network from a content server , said instruction , being arranged , upon execution , to cause one or more processors (processor usage, processor usage tracking) to perform the method of claim 1 .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (more processors) tracking .
US7389330B2
CLAIM 12
. A computer readable storage medium having instructions for providing a proxy service to retrieve content over a data network from a content server , said instruction , being arranged , upon execution , to cause one or more processors (processor usage, processor usage tracking) to perform the method of claim 1 .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (more processors) , memory usage , I/O access rates (configurable threshold) , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US7389330B2
CLAIM 1
. A method for providing a proxy service to retrieve content over a data network from a content server , the method comprising : forwarding a request for the content over the data network towards the content server , wherein a proxy in communication with the content server determines a plurality of objects corresponding to the content based on the request , the proxy generating a list specifying the objects that are to be pre-fetched according to a criterion , wherein number of objects specified in the list is limited by a configurable threshold (I/O access rates) ;
receiving the generated list in response to the request ;
receiving the pre-fetched objects on the list ;
and selectively holding a subsequent request associated with an object specified on the list .

US7389330B2
CLAIM 12
. A computer readable storage medium having instructions for providing a proxy service to retrieve content over a data network from a content server , said instruction , being arranged , upon execution , to cause one or more processors (processor usage, processor usage tracking) to perform the method of claim 1 .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (more processors) tracking .
US7389330B2
CLAIM 12
. A computer readable storage medium having instructions for providing a proxy service to retrieve content over a data network from a content server , said instruction , being arranged , upon execution , to cause one or more processors (processor usage, processor usage tracking) to perform the method of claim 1 .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US7336967B2

Filed: 2003-07-03     Issued: 2008-02-26

Method and system for providing load-sensitive bandwidth allocation

(Original Assignee) Hughes Network Systems LLC     (Current Assignee) JPMorgan Chase Bank NA ; Hughes Network Systems LLC

Frank Kelly, Gabriel Olariu, Douglas Dillon, Jack Rozmaryn
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (more processors) , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US7336967B2
CLAIM 6
. A computer-readable medium bearing instructions for managing bandwidth in a data network , the instructions being arranged , upon execution , to cause one or more processors (processor usage, processor usage tracking) to perform the following : allocating capacity on a communication channel for a terminal to transmit data over the communication channel ;
in anticipation of the terminal having to transmit additional data , further allocating additional capacity on the communication channel for the terminal , wherein the anticipatory allocation is determined according to loading of the data network , wherein the communication channel in the allocating step is established by a transmission frame ;
and selectively adjusting , based on the loading , burst size per frame corresponding to the anticipatory allocation .

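The US7336967B2 mapping above turns on load-sensitive anticipatory allocation, with a burst size per frame that is adjusted to the loading of the network. A minimal sketch, assuming hypothetical slot sizes and a linear back-off; the patent does not prescribe this particular adjustment:

```python
# Hypothetical sketch of US7336967B2 load-sensitive allocation: demand slots for
# queued data plus an anticipatory burst that shrinks as network loading rises.
def allocate_slots(queued_bytes: int,
                   network_load: float,
                   slot_bytes: int = 512,
                   max_anticipatory_slots: int = 8) -> dict:
    """network_load is the fraction of total return-channel capacity in use."""
    demand_slots = -(-queued_bytes // slot_bytes)        # ceiling division
    # Selectively adjust the anticipatory burst size per frame based on loading:
    # a lightly loaded network gets the full burst, a saturated one gets none.
    anticipatory_slots = round(max_anticipatory_slots * max(0.0, 1.0 - network_load))
    return {"demand_slots": demand_slots,
            "anticipatory_slots": anticipatory_slots,
            "total_slots": demand_slots + anticipatory_slots}

print(allocate_slots(queued_bytes=2000, network_load=0.25))
# {'demand_slots': 4, 'anticipatory_slots': 6, 'total_slots': 10}
```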
US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (more processors) tracking .
US7336967B2
CLAIM 6
. A computer-readable medium bearing instructions for managing bandwidth in a data network , the instructions being arranged , upon execution , to cause one or more processors (processor usage, processor usage tracking) to perform the following : allocating capacity on a communication channel for a terminal to transmit data over the communication channel ;
in anticipation of the terminal having to transmit additional data , further allocating additional capacity on the communication channel for the terminal , wherein the anticipatory allocation is determined according to loading of the data network , wherein the communication channel in the allocating step is established by a transmission frame ;
and selectively adjusting , based on the loading , burst size per frame corresponding to the anticipatory allocation .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud (to receive data) , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US7336967B2
CLAIM 17
. A method for managing bandwidth in a bandwidth constrained two-way radio communication system , the method comprising : detecting an active terminal in the communication system ;
allocating bandwidth on a return channel to receive data (public cloud) from the active terminal in response to the detected active terminal ;
and providing subsequent bandwidth allocations on the return channel for anticipated traffic from the terminal based on the loading of the communication system , wherein the bandwidth allocations are adjusted according to one of duration of the subsequent bandwidth allocations and size of the bandwidth allocations , selectively adjusting , based on the loading , burst size per frame corresponding to the anticipatory allocation .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (more processors) , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US7336967B2
CLAIM 6
. A computer-readable medium bearing instructions for managing bandwidth in a data network , the instructions being arranged , upon execution , to cause one or more processors (processor usage, processor usage tracking) to perform the following : allocating capacity on a communication channel for a terminal to transmit data over the communication channel ;
in anticipation of the terminal having to transmit additional data , further allocating additional capacity on the communication channel for the terminal , wherein the anticipatory allocation is determined according to loading of the data network , wherein the communication channel in the allocating step is established by a transmission frame ;
and selectively adjusting , based on the loading , burst size per frame corresponding to the anticipatory allocation .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (more processors) tracking .
US7336967B2
CLAIM 6
. A computer-readable medium bearing instructions for managing bandwidth in a data network , the instructions being arranged , upon execution , to cause one or more processors (processor usage, processor usage tracking) to perform the following : allocating capacity on a communication channel for a terminal to transmit data over the communication channel ;
in anticipation of the terminal having to transmit additional data , further allocating additional capacity on the communication channel for the terminal , wherein the anticipatory allocation is determined according to loading of the data network , wherein the communication channel in the allocating step is established by a transmission frame ;
and selectively adjusting , based on the loading , burst size per frame corresponding to the anticipatory allocation .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud (to receive data) , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US7336967B2
CLAIM 17
. A method for managing bandwidth in a bandwidth constrained two-way radio communication system , the method comprising : detecting an active terminal in the communication system ;
allocating bandwidth on a return channel to receive data (public cloud) from the active terminal in response to the detected active terminal ;
and providing subsequent bandwidth allocations on the return channel for anticipated traffic from the terminal based on the loading of the communication system , wherein the bandwidth allocations are adjusted according to one of duration of the subsequent bandwidth allocations and size of the bandwidth allocations , selectively adjusting , based on the loading , burst size per frame corresponding to the anticipatory allocation .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (more processors) , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US7336967B2
CLAIM 6
. A computer-readable medium bearing instructions for managing bandwidth in a data network , the instructions being arranged , upon execution , to cause one or more processors (processor usage, processor usage tracking) to perform the following : allocating capacity on a communication channel for a terminal to transmit data over the communication channel ;
in anticipation of the terminal having to transmit additional data , further allocating additional capacity on the communication channel for the terminal , wherein the anticipatory allocation is determined according to loading of the data network , wherein the communication channel in the allocating step is established by a transmission frame ;
and selectively adjusting , based on the loading , burst size per frame corresponding to the anticipatory allocation .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (more processors) tracking .
US7336967B2
CLAIM 6
. A computer-readable medium bearing instructions for managing bandwidth in a data network , the instructions being arranged , upon execution , to cause one or more processors (processor usage, processor usage tracking) to perform the following : allocating capacity on a communication channel for a terminal to transmit data over the communication channel ;
in anticipation of the terminal having to transmit additional data , further allocating additional capacity on the communication channel for the terminal , wherein the anticipatory allocation is determined according to loading of the data network , wherein the communication channel in the allocating step is established by a transmission frame ;
and selectively adjusting , based on the loading , burst size per frame corresponding to the anticipatory allocation .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud (to receive data) , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US7336967B2
CLAIM 17
. A method for managing bandwidth in a bandwidth constrained two-way radio communication system , the method comprising : detecting an active terminal in the communication system ;
allocating bandwidth on a return channel to receive data (public cloud) from the active terminal in response to the detected active terminal ;
and providing subsequent bandwidth allocations on the return channel for anticipated traffic from the terminal based on the loading of the communication system , wherein the bandwidth allocations are adjusted according to one of duration of the subsequent bandwidth allocations and size of the bandwidth allocations , selectively adjusting , based on the loading , burst size per frame corresponding to the anticipatory allocation .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20040062246A1

Filed: 2003-06-19     Issued: 2004-04-01

High performance network interface

(Original Assignee) Alacritech Inc     (Current Assignee) Alacritech Inc

Laurence Boucher, Stephen Blightman, Peter Craft, David Higgen, Clive Philbrick, Daryl Starr
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (storage area) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20040062246A1
CLAIM 18
. The method of claim 1 , wherein said first packet is determined to conform to said pre-selected protocol , said transferring comprising : storing a data portion of said first packet in a re-assembly storage area (memory usage) , wherein said re-assembly storage area is configured to only store data portions of packets in said first communication flow ;
and storing one or more headers from said header portion in a header storage area .

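The US20040062246A1 mapping above keys "memory usage" to a per-flow re-assembly storage area that holds only data portions, with headers stored in a separate header storage area. A minimal sketch with a hypothetical FlowStores container:

```python
# Hypothetical sketch of the US20040062246A1 storage split: payload bytes go to a
# per-flow re-assembly area, headers go to a shared header area.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class FlowStores:
    reassembly: Dict[Tuple, bytearray] = field(default_factory=dict)   # one area per flow
    headers: List[bytes] = field(default_factory=list)                 # shared header area

    def store_packet(self, flow_id: Tuple, header: bytes, payload: bytes) -> None:
        # The re-assembly area only ever holds data portions for its own flow.
        self.reassembly.setdefault(flow_id, bytearray()).extend(payload)
        self.headers.append(header)

stores = FlowStores()
stores.store_packet(("10.0.0.1", "10.0.0.2", 80), b"HDR1", b"hello ")
stores.store_packet(("10.0.0.1", "10.0.0.2", 80), b"HDR2", b"world")
print(bytes(stores.reassembly[("10.0.0.1", "10.0.0.2", 80)]))   # b'hello world'
```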
US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage tracking (multiple processors) .
US20040062246A1
CLAIM 38
. The method of claim 33 , wherein the host computer system comprises multiple processors (processor usage tracking) , further comprising identifying a first processor in the host computer system to process said packet in accordance with said pre-selected protocol .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based memory usage (storage area) tracking .
US20040062246A1
CLAIM 18
. The method of claim 1 , wherein said first packet is determined to conform to said pre-selected protocol , said transferring comprising : storing a data portion of said first packet in a re-assembly storage area (memory usage) , wherein said re-assembly storage area is configured to only store data portions of packets in said first communication flow ;
and storing one or more headers from said header portion in a header storage area .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage (storage area) , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20040062246A1
CLAIM 18
. The method of claim 1 , wherein said first packet is determined to conform to said pre-selected protocol , said transferring comprising : storing a data portion of said first packet in a re-assembly storage area (memory usage) , wherein said re-assembly storage area is configured to only store data portions of packets in said first communication flow ;
and storing one or more headers from said header portion in a header storage area .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage tracking (multiple processors) .
US20040062246A1
CLAIM 38
. The method of claim 33 , wherein the host computer system comprises multiple processors (processor usage tracking) , further comprising identifying a first processor in the host computer system to process said packet in accordance with said pre-selected protocol .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (storage area) tracking .
US20040062246A1
CLAIM 18
. The method of claim 1 , wherein said first packet is determined to conform to said pre-selected protocol , said transferring comprising : storing a data portion of said first packet in a re-assembly storage area (memory usage) , wherein said re-assembly storage area is configured to only store data portions of packets in said first communication flow ;
and storing one or more headers from said header portion in a header storage area .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage (storage area) , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20040062246A1
CLAIM 18
. The method of claim 1 , wherein said first packet is determined to conform to said pre-selected protocol , said transferring comprising : storing a data portion of said first packet in a re-assembly storage area (memory usage) , wherein said re-assembly storage area is configured to only store data portions of packets in said first communication flow ;
and storing one or more headers from said header portion in a header storage area .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage tracking (multiple processors) .
US20040062246A1
CLAIM 38
. The method of claim 33 , wherein the host computer system comprises multiple processors (processor usage tracking) , further comprising identifying a first processor in the host computer system to process said packet in accordance with said pre-selected protocol .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (storage area) tracking .
US20040062246A1
CLAIM 18
. The method of claim 1 , wherein said first packet is determined to conform to said pre-selected protocol , said transferring comprising : storing a data portion of said first packet in a re-assembly storage area (memory usage) , wherein said re-assembly storage area is configured to only store data portions of packets in said first communication flow ;
and storing one or more headers from said header portion in a header storage area .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20040010612A1

Filed: 2003-06-10     Issued: 2004-01-15

High performance IP processor using RDMA

(Original Assignee) Pandya Ashish A.     

Ashish Pandya
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (internal memory) , memory usage (storage area) , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme (chip memory) based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates (output ports, input ports, game server) , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (speed line) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20040010612A1
CLAIM 8
. The combination of claim 7 wherein the server is a blade server , thin server , media server , streaming media server , appliance server , Unix server , Linux server , Windows or Windows derivative server , AIX server , clustered server , database server , grid computing server , VOIP server , wireless gateway server , security server , file server , network attached storage server , or game server (I/O access rate, I/O access rates) or a combination of any of the foregoing .

US20040010612A1
CLAIM 18
. The combination of claim 17 wherein said processor provides IP network storage capability for said storage system to operate in an IP based storage area (memory usage) network .

US20040010612A1
CLAIM 48
. A multi-port hardware processor of a predetermined speed providing remote direct memory access capability for enabling data transfer using TCP , SCTP or UDP or a combination of any of the foregoing over IP networks , said processor coupled to multiple input and output ports (I/O access rate, I/O access rates) each having slower speed line (maximum capacity) rates than said predetermined speed , the sum of said slower line speeds being less than or equal to said predetermined speed .

US20040010612A1
CLAIM 103
. For use in a hardware implemented IP network application processor having remote direct memory access capability and including an input queue and queue controller for accepting incoming data packets including new commands from multiple input ports (I/O access rate, I/O access rates) and queuing them on an input packet queue for scheduling and further processing , the process comprising : accepting incoming data packets from one or more input ports and queuing them on an input packet queue ;
and de-queuing said packets for scheduling and further packet processing .

US20040010612A1
CLAIM 212
. The combination of claim 207 wherein the session cache and memory complex includes a cache array to hold session entries and pointers to session fields held off-chip , comprising a tag array includes tag address fields used to match a hash index or a session entry look-up index address to find an associated cache entry from a memory array ;
tag match logic to compare the index address matching with the tag address fields ;
memory banks to hold the session entry fields including pointers to fields held in off-chip memory (first resource management scheme) ;
memory row and column decoders to decode the address of the session entry to be retrieved or stored ;
data input/output ports and buffers to hold session entry data to be written or read from the session memory arrays ;
a session look-up engine to search session entries based on a session index to read or write said entries and/or their specific fields ;
and an external memory controller to store and retrieve session entries and/or other IP packet processor data to/from off chip memories .

US20040010612A1
CLAIM 233
. A hardware processor capable of executing a transport layer RDMA protocol on an IP Network for Internet Protocol data transfers , and including at least one packet processor having an internal memory (processor usage) containing as database entries , frequently or recently used IP session information for processing said data transfers .
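
For reviewer orientation only, the following minimal Python sketch illustrates the two-scheme flow recited in independent claim 1 of US9635134B2. All names, weights, and numeric thresholds are hypothetical and drawn from neither patent: the first scheme ranks VMs by their current consumption rate, the second scheme takes over when the change in consumption rate exceeds the predetermined threshold or total utilization reaches the maximum capacity, and the lowest-priority VMs become candidates for migration to alternate cloud resources.

from dataclasses import dataclass, field

@dataclass
class VmSample:
    vm_id: str
    cpu: float       # processor usage, 0.0-1.0
    memory: float    # memory usage, 0.0-1.0
    io_rate: float   # I/O accesses per second

@dataclass
class ResourceManager:
    change_threshold: float = 0.2          # hypothetical "predetermined threshold"
    max_capacity: float = 0.8              # hypothetical maximum allowed utilization
    last_rates: dict = field(default_factory=dict)

    def consumption_rate(self, s: VmSample) -> float:
        # The claim does not prescribe a formula; this weighting is an assumption.
        return 0.5 * s.cpu + 0.3 * s.memory + 0.2 * min(s.io_rate / 1000.0, 1.0)

    def prioritize(self, samples: list) -> list:
        rates = {s.vm_id: self.consumption_rate(s) for s in samples}
        changes = {v: abs(r - self.last_rates.get(v, r)) for v, r in rates.items()}
        self.last_rates = rates
        within_capacity = sum(rates.values()) <= self.max_capacity
        small_change = max(changes.values(), default=0.0) <= self.change_threshold
        if within_capacity and small_change:
            # First resource management scheme: rank by current consumption rate.
            return sorted(rates, key=rates.get, reverse=True)
        # Second resource management scheme: rank by how sharply consumption changed.
        return sorted(changes, key=changes.get, reverse=True)

def migrate_to_alternate_cloud(vm_id: str) -> None:
    # Placeholder for moving a VM's consumption to resources outside the
    # managed environment (public/community/private/hybrid cloud, VPN, ...).
    print(f"migrating {vm_id} to alternate cloud resources")

The conditional switch between the two rankings is only one reading of how the second scheme is based on the maximum capacity and the threshold; the claim leaves the metric and the switching policy open.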

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the first resource management scheme (chip memory) comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme (attached storage) .
US20040010612A1
CLAIM 8
. The combination of claim 7 wherein the server is a blade server , thin server , media server , streaming media server , appliance server , Unix server , Linux server , Windows or Windows derivative server , AIX server , clustered server , database server , grid computing server , VOIP server , wireless gateway server , security server , file server , network attached storage (replacement scheme) server , or game server or a combination of any of the foregoing .

US20040010612A1
CLAIM 212
. The combination of claim 207 wherein the session cache and memory complex includes a cache array to hold session entries and pointers to session fields held off-chip , comprising a tag array includes tag address fields used to match a hash index or a session entry look-up index address to find an associated cache entry from a memory array ;
tag match logic to compare the index address matching with the tag address fields ;
memory banks to hold the session entry fields including pointers to fields held in off-chip memory (first resource management scheme) ;
memory row and column decoders to decode the address of the session entry to be retrieved or stored ;
data input/output ports and buffers to hold session entry data to be written or read from the session memory arrays ;
a session look-up engine to search session entries based on a session index to read or write said entries and/or their specific fields ;
and an external memory controller to store and retrieve session entries and/or other IP packet processor data to/from off chip memories .
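
Claims 2 through 4 narrow the first scheme to a low inter-reference recency set (LIRS) replacement scheme driven by processor-usage or memory-usage tracking. The sketch below is a deliberately simplified illustration of the LIRS idea only, not a full LIRS cache, and uses hypothetical names: VMs are ranked by inter-reference recency, i.e., how many other VMs were referenced between a VM's two most recent references, with low-recency VMs favored.

from collections import defaultdict

def lirs_priority(access_log: list) -> list:
    """access_log: VM ids in the order their resource usage was observed."""
    last_two = defaultdict(list)                 # vm_id -> its two most recent positions
    for pos, vm in enumerate(access_log):
        last_two[vm] = (last_two[vm] + [pos])[-2:]

    def irr(vm: str) -> float:
        refs = last_two[vm]
        if len(refs) < 2:
            return float("inf")                  # referenced once: treated as cold
        between = set(access_log[refs[0] + 1:refs[1]])
        between.discard(vm)
        return len(between)                      # distinct other VMs referenced in between

    # Lower inter-reference recency -> higher priority for cloud resources.
    return sorted(last_two, key=irr)

print(lirs_priority(["a", "b", "a", "c", "b", "a"]))   # ['a', 'b', 'c']: 'c', seen once, ranks last

A complete LIRS implementation additionally maintains LIR and HIR sets over a bounded stack; that bookkeeping is omitted here.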

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme (attached storage) comprises using LIRS based processor usage (internal memory) tracking .
US20040010612A1
CLAIM 8
. The combination of claim 7 wherein the server is a blade server , thin server , media server , streaming media server , appliance server , Unix server , Linux server , Windows or Windows derivative server , AIX server , clustered server , database server , grid computing server , VOIP server , wireless gateway server , security server , file server , network attached storage (replacement scheme) server , or game server or a combination of any of the foregoing .

US20040010612A1
CLAIM 233
. A hardware processor capable of executing a transport layer RDMA protocol on an IP Network for Internet Protocol data transfers , and including at least one packet processor having an internal memory (processor usage) containing as database entries , frequently or recently used IP session information for processing said data transfers .

US9635134B2
CLAIM 4
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme (attached storage) comprises using LIRS based memory usage (storage area) tracking .
US20040010612A1
CLAIM 8
. The combination of claim 7 wherein the server is a blade server , thin server , media server , streaming media server , appliance server , Unix server , Linux server , Windows or Windows derivative server , AIX server , clustered server , database server , grid computing server , VOIP server , wireless gateway server , security server , file server , network attached storage (replacement scheme) server , or game server or a combination of any of the foregoing .

US20040010612A1
CLAIM 18
. The combination of claim 17 wherein said processor provides IP network storage capability for said storage system to operate in an IP based storage area (memory usage) network .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud (other network, layer remote) , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (other network, layer remote) , Internet resources , or resources included in virtual private networks (VPNs) .
US20040010612A1
CLAIM 54
. A hardware processor providing a transport layer remote (hybrid cloud, public cloud) direct memory access (RDMA) capability , said processor for enabling data transfer over a network using TCP over IP in one or more session connections , said processor including a TCP/IP stack , said stack including an interface to upper layer functions to transport data traffic , said stack providing at least one of the functions of : a . sending and receiving data , including upper layer data ;
b . establishing transport sessions and session teardown functions ;
c . executing error handling functions ;
d . executing time-outs ;
e . executing retransmissions ;
f . executing segmenting and sequencing operations ;
g . maintaining protocol information regarding said active transport sessions ;
h . maintaining TCP/IP state information for each of said one or more session connections ;
i . fragmenting and defragmenting data packets ;
j . routing and forwarding data and control information ;
k . sending to and receiving from a peer , memory regions reserved for RDMA ;
l . recording said memory regions reserved for RDMA in an RDMA database and maintaining said database ;
m . executing operations provided by RDMA capability ;
n . executing security management functions ;
o . executing policy management and enforcement functions ;
p . executing virtualization functions ;
q . communicating errors ;
r . processing Layer 2 media access functions to receive and transmit data packets , validate the packets , handle errors , communicate errors and other Layer 2 functions ;
s . processing physical layer interface functions ;
t . executing TCP/IP checksum generation and verification functions ;
u . processing Out of Order packet handling ;
v . CRC calculation functions ;
w . processing Direct Data Placement/Transfer ;
x . Upper Layer Framing functions ;
y . processing functions and interface to socket API's ;
z . forming packet headers for TCP/IP for transmitted data and extraction of payload from received packets ;
and aa . processing header formation and payload extraction for Layer 2 protocols of data to be transmitted and received data packets ;
respectively .

US20040010612A1
CLAIM 211
. The combination of claim 207 wherein the session database cache includes the session information and fields from TCP/IP sessions , IP Storage sessions , RDMA sessions or other network (hybrid cloud, public cloud) oriented connections between the network source and destination systems .
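
Claim 5 enumerates the categories of alternate cloud resources that may receive the migrated consumption. Purely for illustration, those categories can be modelled as an enumeration, with a hypothetical helper that picks the first reachable migration target; the preference ordering is an assumption, not something the claim specifies.

from enum import Enum, auto
from typing import List, Optional, Set

class AlternateCloud(Enum):
    PUBLIC = auto()
    COMMUNITY = auto()
    PRIVATE = auto()
    HYBRID = auto()
    INTERNET = auto()
    VPN = auto()

def pick_migration_target(preferred: List[AlternateCloud],
                          available: Set[AlternateCloud]) -> Optional[AlternateCloud]:
    # Return the first preferred category that is actually reachable.
    return next((c for c in preferred if c in available), None)

print(pick_migration_target([AlternateCloud.PRIVATE, AlternateCloud.PUBLIC],
                            {AlternateCloud.PUBLIC, AlternateCloud.VPN}))   # AlternateCloud.PUBLIC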

US9635134B2
CLAIM 6
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the second resource management scheme comprises prioritizing the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme (attached storage) .
US20040010612A1
CLAIM 8
. The combination of claim 7 wherein the server is a blade server , thin server , media server , streaming media server , appliance server , Unix server , Linux server , Windows or Windows derivative server , AIX server , clustered server , database server , grid computing server , VOIP server , wireless gateway server , security server , file server , network attached storage (replacement scheme) server , or game server or a combination of any of the foregoing .
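
Claim 6 ties the second resource management scheme to a least recently used (LRU) replacement scheme. A minimal LRU-ordering sketch follows, with illustrative names only: the VM whose resources were used least recently sits first in the eviction order and would be the first candidate to lose priority or be migrated.

from collections import OrderedDict

class LruPriority:
    def __init__(self):
        self._order = OrderedDict()              # insertion order doubles as recency order

    def touch(self, vm_id: str) -> None:
        # Record a resource access; the most recently used VM moves to the end.
        self._order.pop(vm_id, None)
        self._order[vm_id] = None

    def eviction_order(self) -> list:
        # Least recently used first: these lose priority (and migrate) first.
        return list(self._order)

lru = LruPriority()
for vm in ["vm1", "vm2", "vm1", "vm3"]:
    lru.touch(vm)
print(lru.eviction_order())                      # ['vm2', 'vm1', 'vm3']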

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (internal memory) , memory usage (storage area) , or I/O access rates (output ports, input ports, game server) for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme (chip memory) based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (d line) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20040010612A1
CLAIM 8
. The combination of claim 7 wherein the server is a blade server , thin server , media server , streaming media server , appliance server , Unix server , Linux server , Windows or Windows derivative server , AIX server , clustered server , database server , grid computing server , VOIP server , wireless gateway server , security server , file server , network attached storage server , or game server (I/O access rate, I/O access rates) or a combination of any of the foregoing .

US20040010612A1
CLAIM 18
. The combination of claim 17 wherein said processor provides IP network storage capability for said storage system to operate in an IP based storage area (memory usage) network .

US20040010612A1
CLAIM 48
. A multi-port hardware processor of a predetermined speed providing remote direct memory access capability for enabling data transfer using TCP , SCTP or UDP or a combination of any of the foregoing over IP networks , said processor coupled to multiple input and output ports (I/O access rate, I/O access rates) each having slower speed line (maximum capacity) rates than said predetermined speed , the sum of said slower line speeds being less than or equal to said predetermined speed .

US20040010612A1
CLAIM 103
. For use in a hardware implemented IP network application processor having remote direct memory access capability and including an input queue and queue controller for accepting incoming data packets including new commands from multiple input ports (I/O access rate, I/O access rates) and queuing them on an input packet queue for scheduling and further processing , the process comprising : accepting incoming data packets from one or more input ports and queuing them on an input packet queue ;
and de-queuing said packets for scheduling and further packet processing .

US20040010612A1
CLAIM 212
. The combination of claim 207 wherein the session cache and memory complex includes a cache array to hold session entries and pointers to session fields held off-chip , comprising a tag array includes tag address fields used to match a hash index or a session entry look-up index address to find an associated cache entry from a memory array ;
tag match logic to compare the index address matching with the tag address fields ;
memory banks to hold the session entry fields including pointers to fields held in off-chip memory (first resource management scheme) ;
memory row and column decoders to decode the address of the session entry to be retrieved or stored ;
data input/output ports and buffers to hold session entry data to be written or read from the session memory arrays ;
a session look-up engine to search session entries based on a session index to read or write said entries and/or their specific fields ;
and an external memory controller to store and retrieve session entries and/or other IP packet processor data to/from off chip memories .

US20040010612A1
CLAIM 233
. A hardware processor capable of executing a transport layer RDMA protocol on an IP Network for Internet Protocol data transfers , and including at least one packet processor having an internal memory (processor usage) containing as database entries , frequently or recently used IP session information for processing said data transfers .
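
Claim 7 restates the method as stored instructions, beginning with the step of determining a consumption rate from processor usage, memory usage, or I/O access rates. As a host-level stand-in only, the sketch below samples those quantities with the third-party psutil library and checks the change against a threshold; a real resource manager would read per-VM counters from the hypervisor, and the uniform threshold is an assumption.

import psutil  # third-party; host-level counters stand in for per-VM counters here

def sample_consumption(interval: float = 1.0) -> dict:
    io_before = psutil.disk_io_counters()            # assumes disk counters are available
    cpu = psutil.cpu_percent(interval=interval)      # processor usage, percent (blocks for interval)
    io_after = psutil.disk_io_counters()
    mem = psutil.virtual_memory().percent            # memory usage, percent
    io_rate = ((io_after.read_count + io_after.write_count)
               - (io_before.read_count + io_before.write_count)) / interval
    return {"cpu": cpu, "memory": mem, "io_rate": io_rate}

def rate_change_exceeds(prev: dict, cur: dict, threshold: float = 10.0) -> bool:
    # "Predetermined threshold" check; per-metric thresholds would be more
    # realistic since the units differ, but a uniform one keeps the sketch short.
    return any(abs(cur[k] - prev[k]) > threshold for k in cur)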

US9635134B2
CLAIM 8
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme (attached storage) .
US20040010612A1
CLAIM 8
. The combination of claim 7 wherein the server is a blade server , thin server , media server , streaming media server , appliance server , Unix server , Linux server , Windows or Windows derivative server , AIX server , clustered server , database server , grid computing server , VOIP server , wireless gateway server , security server , file server , network attached storage (replacement scheme) server , or game server or a combination of any of the foregoing .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (internal memory) tracking .
US20040010612A1
CLAIM 233
. A hardware processor capable of executing a transport layer RDMA protocol on an IP Network for Internet Protocol data transfers , and including at least one packet processor having an internal memory (processor usage) containing as database entries , frequently or recently used IP session information for processing said data transfers .

US9635134B2
CLAIM 10
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based memory usage (storage area) tracking .
US20040010612A1
CLAIM 18
. The combination of claim 17 wherein said processor provides IP network storage capability for said storage system to operate in an IP based storage area (memory usage) network .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud (other network, layer remote) , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (other network, layer remote) , Internet resources , or resources included in virtual private networks (VPNs) .
US20040010612A1
CLAIM 54
. A hardware processor providing a transport layer remote (hybrid cloud, public cloud) direct memory access (RDMA) capability , said processor for enabling data transfer over a network using TCP over IP in one or more session connections , said processor including a TCP/IP stack , said stack including an interface to upper layer functions to transport data traffic , said stack providing at least one of the functions of : a . sending and receiving data , including upper layer data ;
b . establishing transport sessions and session teardown functions ;
c . executing error handling functions ;
d . executing time-outs ;
e . executing retransmissions ;
f . executing segmenting and sequencing operations ;
g . maintaining protocol information regarding said active transport sessions ;
h . maintaining TCP/IP state information for each of said one or more session connections ;
i . fragmenting and defragmenting data packets ;
j . routing and forwarding data and control information ;
k . sending to and receiving from a peer , memory regions reserved for RDMA ;
l . recording said memory regions reserved for RDMA in an RDMA database and maintaining said database ;
m . executing operations provided by RDMA capability ;
n . executing security management functions ;
o . executing policy management and enforcement functions ;
p . executing virtualization functions ;
q . communicating errors ;
r . processing Layer 2 media access functions to receive and transmit data packets , validate the packets , handle errors , communicate errors and other Layer 2 functions ;
s . processing physical layer interface functions ;
t . executing TCP/IP checksum generation and verification functions ;
u . processing Out of Order packet handling ;
v . CRC calculation functions ;
w . processing Direct Data Placement/Transfer ;
x . Upper Layer Framing functions ;
y . processing functions and interface to socket API's ;
z . forming packet headers for TCP/IP for transmitted data and extraction of payload from received packets ;
and aa . processing header formation and payload extraction for Layer 2 protocols of data to be transmitted and received data packets ;
respectively .

US20040010612A1
CLAIM 211
. The combination of claim 207 wherein the session database cache includes the session information and fields from TCP/IP sessions , IP Storage sessions , RDMA sessions or other network (hybrid cloud, public cloud) oriented connections between the network source and destination systems .

US9635134B2
CLAIM 12
. The machine-readable non-transitory medium of claim 7 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme (attached storage) .
US20040010612A1
CLAIM 8
. The combination of claim 7 wherein the server is a blade server , thin server , media server , streaming media server , appliance server , Unix server , Linux server , Windows or Windows derivative server , AIX server , clustered server , database server , grid computing server , VOIP server , wireless gateway server , security server , file server , network attached storage (replacement scheme) server , or game server or a combination of any of the foregoing .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate (programmable packet) , a memory consumption rate (output data) , an I/O access rate (output ports, input ports, game server) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme (chip memory) based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (internal memory) , memory usage (storage area) , I/O access rates (output ports, input ports, game server) , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity (d line) for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20040010612A1
CLAIM 8
. The combination of claim 7 wherein the server is a blade server , thin server , media server , streaming media server , appliance server , Unix server , Linux server , Windows or Windows derivative server , AIX server , clustered server , database server , grid computing server , VOIP server , wireless gateway server , security server , file server , network attached storage server , or game server (I/O access rate, I/O access rates) or a combination of any of the foregoing .

US20040010612A1
CLAIM 18
. The combination of claim 17 wherein said processor provides IP network storage capability for said storage system to operate in an IP based storage area (memory usage) network .

US20040010612A1
CLAIM 48
. A multi-port hardware processor of a predetermined speed providing remote direct memory access capability for enabling data transfer using TCP , SCTP or UDP or a combination of any of the foregoing over IP networks , said processor coupled to multiple input and output ports (I/O access rate, I/O access rates) each having slower speed line (maximum capacity) rates than said predetermined speed , the sum of said slower line speeds being less than or equal to said predetermined speed .

US20040010612A1
CLAIM 69
. A hardware implemented IP network application processor implementing remote direct memory access (RDMA) capability for providing TCP/IP operations in sessions on information packets from or to an initiator and providing information packets to or from a target , comprising the combination of : a . data processing resources comprising at least one programmable packet (CPU consumption rate) processor for processing said packets ;
b . an RDMA mechanism capable of providing remote direct memory access function between said initiator and said target ;
c . a TCP/IP session cache and memory controller for keeping track of the progress of , and memory useful in , said operations on said packets ;
d . a host interface controller capable of controlling an interface to a host computer in an initiator or target computer system or a fabric interface controller capable of controlling an interface to a fabric ;
and e . a media independent interface capable of controlling an interface to the network media in an initiator or target .

US20040010612A1
CLAIM 103
. For use in a hardware implemented IP network application processor having remote direct memory access capability and including an input queue and queue controller for accepting incoming data packets including new commands from multiple input ports (I/O access rate, I/O access rates) and queuing them on an input packet queue for scheduling and further processing , the process comprising : accepting incoming data packets from one or more input ports and queuing them on an input packet queue ;
and de-queuing said packets for scheduling and further packet processing .

US20040010612A1
CLAIM 180
. A storage flow and RDMA controller for controlling the flow of storage or non-storage data or commands , or a combination thereof , which may or may not require RDMA , for scheduling said commands and data to a scheduler or host processor or control plane processor or additional data processor resources including one or more packet processors , in a hardware IP processor , wherein (1) data or commands are incoming to said IP processor over an IP network , or (2) data or commands are incoming from the host interface from the host processor , said storage flow and RDMA controller comprising at least one of : a . a command scheduler , state controller and sequencer for retrieving commands from a one or more command queues and sending the said command to the control plane processor or the scheduler or one or more of said packet processors for further processing , and managing execution of the said commands by these resources ;
b . a new commands input queue for receiving new commands from a host processor ;
c . an active command queue for holding commands that are being processed , including newly scheduled commands processed from the said new commands queue ;
d . an output completion queue for transmitting the status and the completed commands or their ID to the host processor for the host to take necessary actions , including updating statistics related to the command and/or the connection , any error handling , releasing of any data buffers based on the command under execution ;
e . an output new requests queue for transmitting to the host processor and the drivers running on the host , incoming commands from the packets received by the said IP packet processor for the host to take appropriate actions which may include allocating appropriate buffers for receiving incoming data on the connection , or acting on RDMA commands , or error handling commands or any other incoming commands ;
f . a command look-up engine to look-up the state of the commands being executed including look-up of associated memory descriptors or memory descriptor lists , protection look-up to enable the state update of the commands and enabling the flow of the data associated with the commands to or from the host processor through the host interface ;
g . command look-up tables to store the state of active commands as directed by the said command look-up engine or retrieve the stored state as directed by the said command look-up engine ;
h . said host interface enabling the transfer of data and/or commands to or from the host processor ;
i . a host data pre-fetch manager that directs the pre-fetch of the data anticipated to be required based on the commands active inside the said IP processor , to accelerate the data transfer to the appropriate packet processors when required for processing ;
j . an output data (memory consumption rate) queue for transporting the retrieved data and/or commands form the host processor to the said scheduler or the said control plane processor or the said packet processors for further processing by those resources ;
k . output buffers to hold the data received from the host using the host interface for sending them to the appropriate IP processor resources , including the scheduler or the packet processors or the control plane processor or the session cache and memory controller ;
l . an output queue controller that controls the flow of the received host data to the said output buffers and the said output queues working with the host data prefetch manager and/or the command scheduler , state controller and sequencer ;
m . an input data queue for receiving incoming data extracted by the said packet processor or the control plane processor to be directed to the host processor ;
n . an RDMA engine to perform the RDMA tasks required by those incoming or outgoing RDMA commands or RDMA data , comprising means for recording , retrieving and comparing the region ID , protection keys , performing address translation , and retrieving or depositing data from or to the data buffers in the host memory and further comprising means for providing instructions for the next set of actions or processes to be active for the RDMA commands ;
o . an RDMA look-up table that holds RDMA information , including state per connection or command , used by said RDMA engine to process RDMA commands ;
p . a Target/Initiator Table which is used to record target/initiator information for said data or commands , including IP address , the port to use to reach said IP address and connection or connections to the target/initiator used by the said command scheduler , state controller and sequencer ;
or q . a combination of any of the foregoing .

US20040010612A1
CLAIM 212
. The combination of claim 207 wherein the session cache and memory complex includes a cache array to hold session entries and pointers to session fields held off-chip , comprising a tag array includes tag address fields used to match a hash index or a session entry look-up index address to find an associated cache entry from a memory array ;
tag match logic to compare the index address matching with the tag address fields ;
memory banks to hold the session entry fields including pointers to fields held in off-chip memory (first resource management scheme) ;
memory row and column decoders to decode the address of the session entry to be retrieved or stored ;
data input/output ports and buffers to hold session entry data to be written or read from the session memory arrays ;
a session look-up engine to search session entries based on a session index to read or write said entries and/or their specific fields ;
and an external memory controller to store and retrieve session entries and/or other IP packet processor data to/from off chip memories .

US20040010612A1
CLAIM 233
. A hardware processor capable of executing a transport layer RDMA protocol on an IP Network for Internet Protocol data transfers , and including at least one packet processor having an internal memory (processor usage) containing as database entries , frequently or recently used IP session information for processing said data transfers .
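
Claim 13 adds a change region size, determined from changed regions of a graphical display generated by the VMs, to the monitored quantities. One way such a metric could be computed is sketched below, assuming raw grayscale frames and a 16-pixel tile size; it reports the fraction of display tiles that differ between two frames.

def change_region_size(prev: bytes, cur: bytes, width: int, height: int,
                       tile: int = 16) -> float:
    """Fraction of tiles whose pixels differ between two grayscale frames."""
    assert len(prev) == len(cur) == width * height
    changed = total = 0
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            total += 1
            for y in range(ty, min(ty + tile, height)):
                row = y * width
                end = row + min(tx + tile, width)
                if prev[row + tx:end] != cur[row + tx:end]:
                    changed += 1
                    break
    return changed / total if total else 0.0

# Example: a 32x32 frame where only the top-left tile changes.
before = bytes(32 * 32)
after = bytearray(before)
after[0] = 255
print(change_region_size(before, bytes(after), 32, 32))   # 0.25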

US9635134B2
CLAIM 14
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme (attached storage) .
US20040010612A1
CLAIM 8
. The combination of claim 7 wherein the server is a blade server , thin server , media server , streaming media server , appliance server , Unix server , Linux server , Windows or Windows derivative server , AIX server , clustered server , database server , grid computing server , VOIP server , wireless gateway server , security server , file server , network attached storage (replacement scheme) server , or game server or a combination of any of the foregoing .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (internal memory) tracking .
US20040010612A1
CLAIM 233
. A hardware processor capable of executing a transport layer RDMA protocol on an IP Network for Internet Protocol data transfers , and including at least one packet processor having an internal memory (processor usage) containing as database entries , frequently or recently used IP session information for processing said data transfers .

US9635134B2
CLAIM 16
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based memory usage (storage area) tracking .
US20040010612A1
CLAIM 18
. The combination of claim 17 wherein said processor provides IP network storage capability for said storage system to operate in an IP based storage area (memory usage) network .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud (other network, layer remote) , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (other network, layer remote) , Internet resources , or resources included in virtual private networks (VPNs) .
US20040010612A1
CLAIM 54
. A hardware processor providing a transport layer remote (hybrid cloud, public cloud) direct memory access (RDMA) capability , said processor for enabling data transfer over a network using TCP over IP in one or more session connections , said processor including a TCP/IP stack , said stack including an interface to upper layer functions to transport data traffic , said stack providing at least one of the functions of : a . sending and receiving data , including upper layer data ;
b . establishing transport sessions and session teardown functions ;
c . executing error handling functions ;
d . executing time-outs ;
e . executing retransmissions ;
f . executing segmenting and sequencing operations ;
g . maintaining protocol information regarding said active transport sessions ;
h . maintaining TCP/IP state information for each of said one or more session connections ;
i . fragmenting and defragmenting data packets ;
j . routing and forwarding data and control information ;
k . sending to and receiving from a peer , memory regions reserved for RDMA ;
l . recording said memory regions reserved for RDMA in an RDMA database and maintaining said database ;
m . executing operations provided by RDMA capability ;
n . executing security management functions ;
o . executing policy management and enforcement functions ;
p . executing virtualization functions ;
q . communicating errors ;
r . processing Layer 2 media access functions to receive and transmit data packets , validate the packets , handle errors , communicate errors and other Layer 2 functions ;
s . processing physical layer interface functions ;
t . executing TCP/IP checksum generation and verification functions ;
u . processing Out of Order packet handling ;
v . CRC calculation functions ;
w . processing Direct Data Placement/Transfer ;
x . Upper Layer Framing functions ;
y . processing functions and interface to socket API's ;
z . forming packet headers for TCP/IP for transmitted data and extraction of payload from received packets ;
and aa . processing header formation and payload extraction for Layer 2 protocols of data to be transmitted and received data packets ;
respectively .

US20040010612A1
CLAIM 211
. The combination of claim 207 wherein the session database cache includes the session information and fields from TCP/IP sessions , IP Storage sessions , RDMA sessions or other network (hybrid cloud, public cloud) oriented connections between the network source and destination systems .

US9635134B2
CLAIM 18
. The system of claim 13 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to prioritize the one or more VMs for consumption of the cloud resources using least recently used (LRU) replacement scheme (attached storage) .
US20040010612A1
CLAIM 8
. The combination of claim 7 wherein the server is a blade server , thin server , media server , streaming media server , appliance server , Unix server , Linux server , Windows or Windows derivative server , AIX server , clustered server , database server , grid computing server , VOIP server , wireless gateway server , security server , file server , network attached storage (replacement scheme) server , or game server or a combination of any of the foregoing .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US7191318B2

Filed: 2003-04-07     Issued: 2007-03-13

Native copy instruction for file-access processor with copy-rule-based validation

(Original Assignee) Alacritech Inc     (Current Assignee) RPX Corp

Tarun Kumar Tripathy, Millind Mittal, Kaushik L. Popat, Amod Bodas
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (memory resource) , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources (current value) located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US7191318B2
CLAIM 1
. A processor comprising : an instruction decoder for decoding instructions in a program being executed by the processor , the instructions including a copy instruction ;
a plurality of memory resource (processor usage) s for storing data , including a register file containing registers that store operands operated upon by the instructions , the registers being identified by operand fields in the instructions decoded by the instruction decoder ;
a copy unit , activated by the instruction decoder when the copy instruction is decoded , for performing a copy operation indicated by the copy instruction , the copy operation reading a data block from a source resource in the plurality of memory resources , the source resource specified by the copy instruction , the copy operation writing the data block to a destination resource in the plurality of memory resources , the destination resource specified by the copy instruction , the copy operation parsing the data block to create a series of pointers that correspond to a series of data-items within the data block , wherein at least one of the data-items includes a file name or file handle .

US7191318B2
CLAIM 18
. The computerized method of claim 17 wherein validating a current sub-block comprises : reading a format field in a current rule in the rule table , the format field specifying a current format of the current sub-block ;
determining a current length of the current sub-block , the current length being determined by the current format for fixed-length formats , but the current length being read from a length indicator within the current sub-block when the current format is for a variable-length format ;
reading a portion of the data block in the source resource for a length equal to the current length to read a current value (alternate cloud resources) of the current sub-block ;
reading control bits in the current rule ;
reading a compare value field in the current rule ;
comparing the current value read from the source resource to the compare value field in the current rule when the control bits indicate a compare operation ;
and writing an indicator of the current rule to a condition-code register when the compare operation fails , whereby sub-blocks are validated using rules in the rule table .
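
For orientation on the US7191318B2 reference, whose claim 18 supplies the "current value" mapping used above: the rule-table validation it recites reads a format field per rule, determines a fixed or data-supplied length, reads the sub-block value, and compares it where a compare value is given, recording the first failing rule. The sketch below is a simplified rendering of that flow with assumed field encodings.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Rule:
    fixed_length: Optional[int]        # None: variable length, 1-byte length prefix assumed
    compare: Optional[bytes] = None    # value the sub-block must equal, if any

def validate(block: bytes, rules: List[Rule]) -> Optional[int]:
    """Return the index of the first failing rule, or None if every sub-block passes."""
    pos = 0
    for i, rule in enumerate(rules):
        if rule.fixed_length is not None:
            length = rule.fixed_length
        else:
            length = block[pos]                  # length indicator read from the data itself
            pos += 1
        value = block[pos:pos + length]
        pos += length
        if rule.compare is not None and value != rule.compare:
            return i                             # akin to writing the rule indicator to a condition-code register
    return None

rules = [Rule(fixed_length=2, compare=b"\x00\x01"), Rule(fixed_length=None)]
print(validate(b"\x00\x01\x03abc", rules))       # None: both sub-blocks validate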

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (memory resource) tracking .
US7191318B2
CLAIM 1
. A processor comprising : an instruction decoder for decoding instructions in a program being executed by the processor , the instructions including a copy instruction ;
a plurality of memory resource (processor usage) s for storing data , including a register file containing registers that store operands operated upon by the instructions , the registers being identified by operand fields in the instructions decoded by the instruction decoder ;
a copy unit , activated by the instruction decoder when the copy instruction is decoded , for performing a copy operation indicated by the copy instruction , the copy operation reading a data block from a source resource in the plurality of memory resources , the source resource specified by the copy instruction , the copy operation writing the data block to a destination resource in the plurality of memory resources , the destination resource specified by the copy instruction , the copy operation parsing the data block to create a series of pointers that correspond to a series of data-items within the data block , wherein at least one of the data-items includes a file name or file handle .

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources (current value) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US7191318B2
CLAIM 18
. The computerized method of claim 17 wherein validating a current sub-block comprises : reading a format field in a current rule in the rule table , the format field specifying a current format of the current sub-block ;
determining a current length of the current sub-block , the current length being determined by the current format for fixed-length formats , but the current length being read from a length indicator within the current sub-block when the current format is for a variable-length format ;
reading a portion of the data block in the source resource for a length equal to the current length to read a current value (alternate cloud resources) of the current sub-block ;
reading control bits in the current rule ;
reading a compare value field in the current rule ;
comparing the current value read from the source resource to the compare value field in the current rule when the control bits indicate a compare operation ;
and writing an indicator of the current rule to a condition-code register when the compare operation fails , whereby sub-blocks are validated using rules in the rule table .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (memory resource) , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources (current value) located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US7191318B2
CLAIM 1
. A processor comprising : an instruction decoder for decoding instructions in a program being executed by the processor , the instructions including a copy instruction ;
a plurality of memory resource (processor usage) s for storing data , including a register file containing registers that store operands operated upon by the instructions , the registers being identified by operand fields in the instructions decoded by the instruction decoder ;
a copy unit , activated by the instruction decoder when the copy instruction is decoded , for performing a copy operation indicated by the copy instruction , the copy operation reading a data block from a source resource in the plurality of memory resources , the source resource specified by the copy instruction , the copy operation writing the data block to a destination resource in the plurality of memory resources , the destination resource specified by the copy instruction , the copy operation parsing the data block to create a series of pointers that correspond to a series of data-items within the data block , wherein at least one of the data-items includes a file name or file handle .

US7191318B2
CLAIM 18
. The computerized method of claim 17 wherein validating a current sub-block comprises : reading a format field in a current rule in the rule table , the format field specifying a current format of the current sub-block ;
determining a current length of the current sub-block , the current length being determined by the current format for fixed-length formats , but the current length being read from a length indicator within the current sub-block when the current format is for a variable-length format ;
reading a portion of the data block in the source resource for a length equal to the current length to read a current value (alternate cloud resources) of the current sub-block ;
reading control bits in the current rule ;
reading a compare value field in the current rule ;
comparing the current value read from the source resource to the compare value field in the current rule when the control bits indicate a compare operation ;
and writing an indicator of the current rule to a condition-code register when the compare operation fails , whereby sub-blocks are validated using rules in the rule table .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (memory resource) tracking .
US7191318B2
CLAIM 1
. A processor comprising : an instruction decoder for decoding instructions in a program being executed by the processor , the instructions including a copy instruction ;
a plurality of memory resource (processor usage) s for storing data , including a register file containing registers that store operands operated upon by the instructions , the registers being identified by operand fields in the instructions decoded by the instruction decoder ;
a copy unit , activated by the instruction decoder when the copy instruction is decoded , for performing a copy operation indicated by the copy instruction , the copy operation reading a data block from a source resource in the plurality of memory resources , the source resource specified by the copy instruction , the copy operation writing the data block to a destination resource in the plurality of memory resources , the destination resource specified by the copy instruction , the copy operation parsing the data block to create a series of pointers that correspond to a series of data-items within the data block , wherein at least one of the data-items includes a file name or file handle .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources (current value) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US7191318B2
CLAIM 18
. The computerized method of claim 17 wherein validating a current sub-block comprises : reading a format field in a current rule in the rule table , the format field specifying a current format of the current sub-block ;
determining a current length of the current sub-block , the current length being determined by the current format for fixed-length formats , but the current length being read from a length indicator within the current sub-block when the current format is for a variable-length format ;
reading a portion of the data block in the source resource for a length equal to the current length to read a current value (alternate cloud resources) of the current sub-block ;
reading control bits in the current rule ;
reading a compare value field in the current rule ;
comparing the current value read from the source resource to the compare value field in the current rule when the control bits indicate a compare operation ;
and writing an indicator of the current rule to a condition-code register when the compare operation fails , whereby sub-blocks are validated using rules in the rule table .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (memory resource) , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources (current value) located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US7191318B2
CLAIM 1
. A processor comprising : an instruction decoder for decoding instructions in a program being executed by the processor , the instructions including a copy instruction ;
a plurality of memory resource (processor usage) s for storing data , including a register file containing registers that store operands operated upon by the instructions , the registers being identified by operand fields in the instructions decoded by the instruction decoder ;
a copy unit , activated by the instruction decoder when the copy instruction is decoded , for performing a copy operation indicated by the copy instruction , the copy operation reading a data block from a source resource in the plurality of memory resources , the source resource specified by the copy instruction , the copy operation writing the data block to a destination resource in the plurality of memory resources , the destination resource specified by the copy instruction , the copy operation parsing the data block to create a series of pointers that correspond to a series of data-items within the data block , wherein at least one of the data-items includes a file name or file handle .

US7191318B2
CLAIM 18
. The computerized method of claim 17 wherein validating a current sub-block comprises : reading a format field in a current rule in the rule table , the format field specifying a current format of the current sub-block ;
determining a current length of the current sub-block , the current length being determined by the current format for fixed-length formats , but the current length being read from a length indicator within the current sub-block when the current format is for a variable-length format ;
reading a portion of the data block in the source resource for a length equal to the current length to read a current value (alternate cloud resources) of the current sub-block ;
reading control bits in the current rule ;
reading a compare value field in the current rule ;
comparing the current value read from the source resource to the compare value field in the current rule when the control bits indicate a compare operation ;
and writing an indicator of the current rule to a condition-code register when the compare operation fails , whereby sub-blocks are validated using rules in the rule table .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (memory resource) tracking .
US7191318B2
CLAIM 1
. A processor comprising : an instruction decoder for decoding instructions in a program being executed by the processor , the instructions including a copy instruction ;
a plurality of memory resource (processor usage) s for storing data , including a register file containing registers that store operands operated upon by the instructions , the registers being identified by operand fields in the instructions decoded by the instruction decoder ;
a copy unit , activated by the instruction decoder when the copy instruction is decoded , for performing a copy operation indicated by the copy instruction , the copy operation reading a data block from a source resource in the plurality of memory resources , the source resource specified by the copy instruction , the copy operation writing the data block to a destination resource in the plurality of memory resources , the destination resource specified by the copy instruction , the copy operation parsing the data block to create a series of pointers that correspond to a series of data-items within the data block , wherein at least one of the data-items includes a file name or file handle .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources (current value) include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US7191318B2
CLAIM 18
. The computerized method of claim 17 wherein validating a current sub-block comprises : reading a format field in a current rule in the rule table , the format field specifying a current format of the current sub-block ;
determining a current length of the current sub-block , the current length being determined by the current format for fixed-length formats , but the current length being read from a length indicator within the current sub-block when the current format is for a variable-length format ;
reading a portion of the data block in the source resource for a length equal to the current length to read a current value (alternate cloud resources) of the current sub-block ;
reading control bits in the current rule ;
reading a compare value field in the current rule ;
comparing the current value read from the source resource to the compare value field in the current rule when the control bits indicate a compare operation ;
and writing an indicator of the current rule to a condition-code register when the compare operation fails , whereby sub-blocks are validated using rules in the rule table .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
EP1876758A2

Filed: 2003-01-13     Issued: 2008-01-09

Peer-to-Peer method of quality of service (QoS) probing and analysis and infrastructure employing same

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Corp

Qian Zhang, Wenwu Zhu, Xin Yan Zhang, Yongqiang Xiong

US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size (high quality) based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
EP1876758A2
CLAIM 22
A method of enabling establishment of a peering session having a high quality (change region size) of service in a network , comprising the steps of : receiving a request for contact information for peers currently available on the network , the request originating from a first peer having IP address information associated therewith ;
analyzing stored data records of peers on the network to identify those peers on the network which could satisfy the request ;
sorting the identified data records of the peers on the network that could satisfy the request by stored quality of service information relative to the IP address information of the first peer to form a peer contact list ;
transmitting the peer contact list to the first peer .
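
As a reading aid for the claim 1 limitations charted above (first scheme, threshold on the change in consumption rate, second scheme tied to a maximum allowed capacity, and migration to alternate cloud resources), a short Python sketch follows. The metric scale, the 0.2 threshold, and the 0.9 capacity figure are assumptions for illustration only.

```python
# Illustrative sketch of the two-stage prioritization and migration decision.
def reprioritize(current, previous, threshold=0.2, max_capacity=0.9):
    """current/previous: {vm_id: consumption rate in [0, 1]}. Returns the VM
    priority order plus any VMs flagged for migration to alternate resources."""
    # First resource management scheme: order VMs by observed consumption rate.
    order = sorted(current, key=current.get, reverse=True)
    # Does the change in consumption rate exceed the predetermined threshold?
    changed = {vm for vm in current
               if abs(current[vm] - previous.get(vm, 0.0)) > threshold}
    if not changed:
        return order, []
    # Second scheme: weigh aggregate usage against the maximum allowed capacity;
    # VMs with large changes become migration candidates when capacity is exceeded.
    over_capacity = sum(current.values()) > max_capacity
    migrate = [vm for vm in order if vm in changed] if over_capacity else []
    return order, migrate

order, to_migrate = reprioritize({"vm1": 0.6, "vm2": 0.5}, {"vm1": 0.2, "vm2": 0.5})
# vm1's usage jumped by 0.4 and total usage is 1.1 > 0.9, so vm1 is flagged.
```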

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size (high quality) based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
EP1876758A2
CLAIM 22
A method of enabling establishment of a peering session having a high quality (change region size) of service in a network , comprising the steps of : receiving a request for contact information for peers currently available on the network , the request originating from a first peer having IP address information associated therewith ;
analyzing stored data records of peers on the network to identify those peers on the network which could satisfy the request ;
sorting the identified data records of the peers on the network that could satisfy the request by stored quality of service information relative to the IP address information of the first peer to form a peer contact list ;
transmitting the peer contact list to the first peer .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (game server) , or a change region size (high quality) determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
EP1876758A2
CLAIM 6
The method of claim 1 , further comprising the step of communicating to the connection server a desire to be a game server (I/O access rate) .

EP1876758A2
CLAIM 22
A method of enabling establishment of a peering session having a high quality (change region size) of service in a network , comprising the steps of : receiving a request for contact information for peers currently available on the network , the request originating from a first peer having IP address information associated therewith ;
analyzing stored data records of peers on the network to identify those peers on the network which could satisfy the request ;
sorting the identified data records of the peers on the network that could satisfy the request by stored quality of service information relative to the IP address information of the first peer to form a peer contact list ;
transmitting the peer contact list to the first peer .
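
Claim 13 adds a change region size determined from changed regions of a graphical display to the monitored consumption metrics. The sketch below shows one plausible way such a metric could be computed by diffing fixed-size tiles of two frames; the tile size and flat byte-array frame model are assumptions, not the patent's method.

```python
# Illustrative only: derive a "change region size" by diffing tiles of two frames.
def change_region_size(prev_frame, curr_frame, width, tile=16):
    """Frames are flat byte sequences of equal length; returns the number of
    pixels lying in tiles whose contents changed between the two frames."""
    height = len(curr_frame) // width
    changed_pixels = 0
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            rows_differ = any(
                prev_frame[y * width + tx : y * width + min(tx + tile, width)]
                != curr_frame[y * width + tx : y * width + min(tx + tile, width)]
                for y in range(ty, min(ty + tile, height))
            )
            if rows_differ:
                changed_pixels += min(tile, width - tx) * min(tile, height - ty)
    return changed_pixels

prev = bytes(64 * 64)
curr = bytes(64 * 64 - 1) + b"\x01"     # one pixel changed in the last tile
assert change_region_size(prev, curr, width=64) == 16 * 16
```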




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US6879526B2

Filed: 2002-10-31     Issued: 2005-04-12

Methods and apparatus for improved memory access

(Original Assignee) Ring Tech Enterprises LLC     (Current Assignee) United Microelectronics Corp

William Thomas Lynch, David James Herbison
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (load data, more set) , memory usage , or input/output (I/O) access rates (transfer data) for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme (non-volatile memory, more inputs) based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud (one output) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US6879526B2
CLAIM 2
. The apparatus of claim 1 , comprising a connection circuit , responsive to a set of one or more timing pulses correlated with one or more pulses of the clock signal , to load data (processor usage) from one or more of the outputs of the memory device into a corresponding shift register .

US6879526B2
CLAIM 10
. The apparatus of claim 7 , wherein the timing circuit generates a plurality of sampling pulses with variable timing that transfer data (access rates, I/O access rates, I/O access rate) from the memory devices , and the transferred data is analyzed for its accuracy , and the analysis is used to choose a preferred timing for future sampling pulses .

US6879526B2
CLAIM 11
. The apparatus of claim 6 , wherein the pass pulses are determined based on synchronization adjustments between data transferred from the memory to the one or more sets (processor usage) of shift registers and data outside the apparatus .

US6879526B2
CLAIM 20
. The apparatus of claim 19 further including one or more selectors for selecting among one or more inputs (first resource management scheme) to the selector , and outputting data arriving on the selected inputs such that the data is loaded into one of the sets of shift registers , and wherein data arriving at the inputs includes one or more of data to be written to one or more of the memory devices and test data for testing the apparatus .

US6879526B2
CLAIM 29
. The apparatus of claim 1 , wherein the memory device includes a non-volatile memory (first resource management scheme) device .

US6879526B2
CLAIM 51
. A method , comprising : shifting data in a first set of shift registers interconnected in series from the shift register to a next one of the shift registers in the first set on the basis of a clock signal having a shift frequency ;
shifting data in a second set of shift registers interconnected in series from the shift register to a next one of the shift registers in the second set at the shift frequency ;
loading data from at least one output (alternate cloud) of a memory device into a corresponding shift register in the first set ;
loading data from at least one output of the memory device into a corresponding shift register in the second set ;
shifting the data loaded into the shift register in the first set from the shift register to a next one of the shift registers in the first set according to the shift frequency ;
and shifting the data loaded into the shift register in the second set from the shift register to a next one of the shift registers in the second set according to the shift frequency .
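
For context on the US6879526B2 claim 51 language mapped to "alternate cloud" above, the following toy Python simulation models two sets of serially interconnected shift registers that are loaded from memory-device outputs and shifted at a common shift frequency. The deque-based model and register depth are assumptions for illustration.

```python
# Toy simulation of two shift-register sets clocked at one shift frequency.
from collections import deque

class ShiftRegisterSet:
    """Toy model of one set of serially interconnected shift registers."""
    def __init__(self, depth):
        self.regs = deque([0] * depth, maxlen=depth)

    def load(self, value):
        self.regs[0] = value          # load data from one output of the memory device

    def shift(self):
        out = self.regs[-1]           # data leaving the last register in the set
        self.regs.rotate(1)           # shift toward the next register in the set
        self.regs[0] = 0
        return out

first, second = ShiftRegisterSet(4), ShiftRegisterSet(4)
memory_outputs = [(0xA, 0x1), (0xB, 0x2), (0xC, 0x3)]   # two device outputs per clock
stream = []
for out_a, out_b in memory_outputs:   # both sets advance at the same shift frequency
    first.load(out_a)
    second.load(out_b)
    stream.append((first.shift(), second.shift()))
```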

US9635134B2
CLAIM 2
. The method of claim 1 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the first resource management scheme (non-volatile memory, more inputs) comprises prioritizing the one or more VMs for consumption of the cloud resources using a low inter-reference recency set (LIRS) replacement scheme .
US6879526B2
CLAIM 20
. The apparatus of claim 19 further including one or more selectors for selecting among one or more inputs (first resource management scheme) to the selector , and outputting data arriving on the selected inputs such that the data is loaded into one of the sets of shift registers , and wherein data arriving at the inputs includes one or more of data to be written to one or more of the memory devices and test data for testing the apparatus .

US6879526B2
CLAIM 29
. The apparatus of claim 1 , wherein the memory device includes a non-volatile memory (first resource management scheme) device .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (load data, more set) tracking .
US6879526B2
CLAIM 2
. The apparatus of claim 1 , comprising a connection circuit , responsive to a set of one or more timing pulses correlated with one or more pulses of the clock signal , to load data (processor usage) from one or more of the outputs of the memory device into a corresponding shift register .

US6879526B2
CLAIM 11
. The apparatus of claim 6 , wherein the pass pulses are determined based on synchronization adjustments between data transferred from the memory to the one or more sets (processor usage) of shift registers and data outside the apparatus .
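
US9635134B2 claims 2-3 (and the parallel medium and system claims) recite LIRS-based processor usage tracking. The sketch below is a heavily simplified illustration of the inter-reference-recency idea behind an LIRS-style scheme, partitioning VMs into a low-IRR (higher priority) set and a high-IRR set; it is not a faithful LIRS implementation, and the spike-log input and capacity parameter are assumptions.

```python
# Simplified, non-faithful illustration of LIRS-style (inter-reference recency) ranking.
def lirs_partition(access_log, lir_capacity=2):
    """access_log: ordered list of VM ids, one entry per observed usage spike.
    Returns (lir_set, hir_set), favoring VMs with the smallest inter-reference recency."""
    last_seen, recency = {}, {}
    for t, vm in enumerate(access_log):
        if vm in last_seen:
            recency[vm] = t - last_seen[vm]      # inter-reference recency of this VM
        last_seen[vm] = t
    ranked = sorted(last_seen, key=lambda vm: recency.get(vm, float("inf")))
    return set(ranked[:lir_capacity]), set(ranked[lir_capacity:])

lir, hir = lirs_partition(["vm1", "vm2", "vm1", "vm3", "vm2", "vm1"])
# vm1 recurs most tightly, so it lands in the LIR (higher-priority) set.
```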

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud (one output) resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US6879526B2
CLAIM 51
. A method , comprising : shifting data in a first set of shift registers interconnected in series from the shift register to a next one of the shift registers in the first set on the basis of a clock signal having a shift frequency ;
shifting data in a second set of shift registers interconnected in series from the shift register to a next one of the shift registers in the second set at the shift frequency ;
loading data from at least one output (alternate cloud) of a memory device into a corresponding shift register in the first set ;
loading data from at least one output of the memory device into a corresponding shift register in the second set ;
shifting the data loaded into the shift register in the first set from the shift register to a next one of the shift registers in the first set according to the shift frequency ;
and shifting the data loaded into the shift register in the second set from the shift register to a next one of the shift registers in the second set according to the shift frequency .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (load data, more set) , memory usage , or I/O access rates (transfer data) for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme (non-volatile memory, more inputs) based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud (one output) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US6879526B2
CLAIM 2
. The apparatus of claim 1 , comprising a connection circuit , responsive to a set of one or more timing pulses correlated with one or more pulses of the clock signal , to load data (processor usage) from one or more of the outputs of the memory device into a corresponding shift register .

US6879526B2
CLAIM 10
. The apparatus of claim 7 , wherein the timing circuit generates a plurality of sampling pulses with variable timing that transfer data (access rates, I/O access rates, I/O access rate) from the memory devices , and the transferred data is analyzed for its accuracy , and the analysis is used to choose a preferred timing for future sampling pulses .

US6879526B2
CLAIM 11
. The apparatus of claim 6 , wherein the pass pulses are determined based on synchronization adjustments between data transferred from the memory to the one or more sets (processor usage) of shift registers and data outside the apparatus .

US6879526B2
CLAIM 20
. The apparatus of claim 19 further including one or more selectors for selecting among one or more inputs (first resource management scheme) to the selector , and outputting data arriving on the selected inputs such that the data is loaded into one of the sets of shift registers , and wherein data arriving at the inputs includes one or more of data to be written to one or more of the memory devices and test data for testing the apparatus .

US6879526B2
CLAIM 29
. The apparatus of claim 1 , wherein the memory device includes a non-volatile memory (first resource management scheme) device .

US6879526B2
CLAIM 51
. A method , comprising : shifting data in a first set of shift registers interconnected in series from the shift register to a next one of the shift registers in the first set on the basis of a clock signal having a shift frequency ;
shifting data in a second set of shift registers interconnected in series from the shift register to a next one of the shift registers in the second set at the shift frequency ;
loading data from at least one output (alternate cloud) of a memory device into a corresponding shift register in the first set ;
loading data from at least one output of the memory device into a corresponding shift register in the second set ;
shifting the data loaded into the shift register in the first set from the shift register to a next one of the shift registers in the first set according to the shift frequency ;
and shifting the data loaded into the shift register in the second set from the shift register to a next one of the shift registers in the second set according to the shift frequency .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (load data, more set) tracking .
US6879526B2
CLAIM 2
. The apparatus of claim 1 , comprising a connection circuit , responsive to a set of one or more timing pulses correlated with one or more pulses of the clock signal , to load data (processor usage) from one or more of the outputs of the memory device into a corresponding shift register .

US6879526B2
CLAIM 11
. The apparatus of claim 6 , wherein the pass pulses are determined based on synchronization adjustments between data transferred from the memory to the one or more sets (processor usage) of shift registers and data outside the apparatus .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud (one output) resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US6879526B2
CLAIM 51
. A method , comprising : shifting data in a first set of shift registers interconnected in series from the shift register to a next one of the shift registers in the first set on the basis of a clock signal having a shift frequency ;
shifting data in a second set of shift registers interconnected in series from the shift register to a next one of the shift registers in the second set at the shift frequency ;
loading data from at least one output (alternate cloud) of a memory device into a corresponding shift register in the first set ;
loading data from at least one output of the memory device into a corresponding shift register in the second set ;
shifting the data loaded into the shift register in the first set from the shift register to a next one of the shift registers in the first set according to the shift frequency ;
and shifting the data loaded into the shift register in the second set from the shift register to a next one of the shift registers in the second set according to the shift frequency .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (transfer data) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme (non-volatile memory, more inputs) based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (load data, more set) , memory usage , I/O access rates (transfer data) , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud (one output) resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US6879526B2
CLAIM 2
. The apparatus of claim 1 , comprising a connection circuit , responsive to a set of one or more timing pulses correlated with one or more pulses of the clock signal , to load data (processor usage) from one or more of the outputs of the memory device into a corresponding shift register .

US6879526B2
CLAIM 10
. The apparatus of claim 7 , wherein the timing circuit generates a plurality of sampling pulses with variable timing that transfer data (access rates, I/O access rates, I/O access rate) from the memory devices , and the transferred data is analyzed for its accuracy , and the analysis is used to choose a preferred timing for future sampling pulses .

US6879526B2
CLAIM 11
. The apparatus of claim 6 , wherein the pass pulses are determined based on synchronization adjustments between data transferred from the memory to the one or more sets (processor usage) of shift registers and data outside the apparatus .

US6879526B2
CLAIM 20
. The apparatus of claim 19 further including one or more selectors for selecting among one or more inputs (first resource management scheme) to the selector , and outputting data arriving on the selected inputs such that the data is loaded into one of the sets of shift registers , and wherein data arriving at the inputs includes one or more of data to be written to one or more of the memory devices and test data for testing the apparatus .

US6879526B2
CLAIM 29
. The apparatus of claim 1 , wherein the memory device includes a non-volatile memory (first resource management scheme) device .

US6879526B2
CLAIM 51
. A method , comprising : shifting data in a first set of shift registers interconnected in series from the shift register to a next one of the shift registers in the first set on the basis of a clock signal having a shift frequency ;
shifting data in a second set of shift registers interconnected in series from the shift register to a next one of the shift registers in the second set at the shift frequency ;
loading data from at least one output (alternate cloud) of a memory device into a corresponding shift register in the first set ;
loading data from at least one output of the memory device into a corresponding shift register in the second set ;
shifting the data loaded into the shift register in the first set from the shift register to a next one of the shift registers in the first set according to the shift frequency ;
and shifting the data loaded into the shift register in the second set from the shift register to a next one of the shift registers in the second set according to the shift frequency .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (load data, more set) tracking .
US6879526B2
CLAIM 2
. The apparatus of claim 1 , comprising a connection circuit , responsive to a set of one or more timing pulses correlated with one or more pulses of the clock signal , to load data (processor usage) from one or more of the outputs of the memory device into a corresponding shift register .

US6879526B2
CLAIM 11
. The apparatus of claim 6 , wherein the pass pulses are determined based on synchronization adjustments between data transferred from the memory to the one or more sets (processor usage) of shift registers and data outside the apparatus .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud (one output) resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud , Internet resources , or resources included in virtual private networks (VPNs) .
US6879526B2
CLAIM 51
. A method , comprising : shifting data in a first set of shift registers interconnected in series from the shift register to a next one of the shift registers in the first set on the basis of a clock signal having a shift frequency ;
shifting data in a second set of shift registers interconnected in series from the shift register to a next one of the shift registers in the second set at the shift frequency ;
loading data from at least one output (alternate cloud) of a memory device into a corresponding shift register in the first set ;
loading data from at least one output of the memory device into a corresponding shift register in the second set ;
shifting the data loaded into the shift register in the first set from the shift register to a next one of the shift registers in the first set according to the shift frequency ;
and shifting the data loaded into the shift register in the second set from the shift register to a next one of the shift registers in the second set according to the shift frequency .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20040064590A1

Filed: 2002-09-30     Issued: 2004-04-01

Intelligent network storage interface system

(Original Assignee) Alacritech Inc     (Current Assignee) Alacritech Inc

Daryl Starr, Clive Philbrick, Laurence Boucher
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates (second control) for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20040064590A1
CLAIM 1
. An apparatus connected to a network , the apparatus comprising : a host computer having a processor and a file system , said host computer executing instructions corresponding to a plurality of communication protocols ;
and an interface device connected to said host computer by a bus and connected to the network by a port , said interface device receiving via said port a first message , said first message including a data portion and a first control portion , said data portion cached on said interface device and not passing across said bus to said host computer , said host computer transmitting a second message via said port , said second message including said data portion and a second control (access rates) portion , said second control portion of said second message passing across said bus to said interface device .
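
The US20040064590A1 claim 1 language charted against "I/O access rates" separates a cached data portion on the interface device from control portions that cross the host bus. A small Python sketch of that split follows; the class name, message identifiers, and dictionary-based cache are assumptions for illustration, not the reference's implementation.

```python
# Sketch of the data/control split: data stays on the interface device, only
# control portions cross the host bus, and the two are re-joined at the port.
class InterfaceDevice:
    def __init__(self):
        self.data_cache = {}                 # data portions kept on the device
        self.bus_traffic = []                # what actually crosses the host bus

    def receive(self, msg_id, control, data):
        self.data_cache[msg_id] = data       # data portion does not cross the bus
        self.bus_traffic.append(("ctrl_in", msg_id, control))
        return control                       # only the first control portion reaches the host

    def transmit(self, msg_id, second_control):
        self.bus_traffic.append(("ctrl_out", msg_id, second_control))
        return second_control, self.data_cache.pop(msg_id)   # re-joined for the outbound message

nic = InterfaceDevice()
nic.receive("m1", control={"len": 3}, data=b"abc")
frame = nic.transmit("m1", second_control={"ack": True})
```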

US9635134B2
CLAIM 5
. The method of claim 1 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (storage location) , Internet resources , or resources included in virtual private networks (VPNs) .
US20040064590A1
CLAIM 16
. The apparatus of claim 11 , wherein said header is processed to obtain a storage location (hybrid cloud) for said data .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates (second control) for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20040064590A1
CLAIM 1
. An apparatus connected to a network , the apparatus comprising : a host computer having a processor and a file system , said host computer executing instructions corresponding to a plurality of communication protocols ;
and an interface device connected to said host computer by a bus and connected to the network by a port , said interface device receiving via said port a first message , said first message including a data portion and a first control portion , said data portion cached on said interface device and not passing across said bus to said host computer , said host computer transmitting a second message via said port , said second message including said data portion and a second control (access rates) portion , said second control portion of said second message passing across said bus to said interface device .

US9635134B2
CLAIM 11
. The machine-readable non-transitory medium of claim 7 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (storage location) , Internet resources , or resources included in virtual private networks (VPNs) .
US20040064590A1
CLAIM 16
. The apparatus of claim 11 , wherein said header is processed to obtain a storage location (hybrid cloud) for said data .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates (second control) , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20040064590A1
CLAIM 1
. An apparatus connected to a network , the apparatus comprising : a host computer having a processor and a file system , said host computer executing instructions corresponding to a plurality of communication protocols ;
and an interface device connected to said host computer by a bus and connected to the network by a port , said interface device receiving via said port a first message , said first message including a data portion and a first control portion , said data portion cached on said interface device and not passing across said bus to said host computer , said host computer transmitting a second message via said port , said second message including said data portion and a second control (access rates) portion , said second control portion of said second message passing across said bus to said interface device .

US9635134B2
CLAIM 17
. The system of claim 13 , wherein the alternate cloud resources include one or more of resources included in public cloud , resources included in community cloud , resources included in private cloud , resources included in hybrid cloud (storage location) , Internet resources , or resources included in virtual private networks (VPNs) .
US20040064590A1
CLAIM 16
. The apparatus of claim 11 , wherein said header is processed to obtain a storage location (hybrid cloud) for said data .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US20040073703A1

Filed: 2002-09-27     Issued: 2004-04-15

Fast-path apparatus for receiving data corresponding a TCP connection

(Original Assignee) Alacritech Inc     (Current Assignee) Alacritech Inc

Laurence Boucher, Stephen Blightman, Peter Craft, David Higgen, Clive Philbrick, Daryl Starr
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (central processing unit, hardware logic, transfer data) , memory usage , or input/output (I/O) access rates (central processing unit, hardware logic, transfer data) for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20040073703A1
CLAIM 1
. A device for use with a first apparatus that is connectable to a second apparatus by a network , the first apparatus containing a memory and a first processor operating a stack of protocol processing layers that create a context for communication between an application of the first apparatus and an application of the second apparatus , the device comprising : a communication processing mechanism connected to the first processor and to the network , said communication processing mechanism containing a second processor and instructions to choose , by referencing the context , whether a network message packet is processed by the protocol processing layers or the context is employed to transfer data (processor usage, I/O access rate, access rates, I/O access rates) contained in said packet between the network and the first apparatus memory .
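
US20040073703A1 claim 1 describes choosing, by reference to a connection context, whether a packet is handled by the host's protocol processing layers or its data is transferred directly to host memory. The sketch below illustrates that fast-path versus slow-path choice; the 4-tuple context key, the rcv_nxt field, and the bytearray host memory model are assumptions rather than the reference's design.

```python
# Illustrative fast-path/slow-path choice keyed on a connection context.
def handle_packet(packet, contexts, host_memory, slow_path):
    key = (packet["src"], packet["dst"], packet["sport"], packet["dport"])
    ctx = contexts.get(key)
    if ctx is None:                          # no context: slow path through the protocol stack
        return slow_path(packet)
    host_memory.extend(packet["payload"])    # fast path: payload straight to host memory
    ctx["rcv_nxt"] += len(packet["payload"]) # offload device updates TCP state itself
    return "fast-path"

contexts = {("10.0.0.2", "10.0.0.1", 5001, 80): {"rcv_nxt": 0}}
memory = bytearray()
result = handle_packet({"src": "10.0.0.2", "dst": "10.0.0.1", "sport": 5001,
                        "dport": 80, "payload": b"GET /"},
                       contexts, memory, slow_path=lambda p: "slow-path")
assert result == "fast-path" and bytes(memory) == b"GET /"
```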

US20040073703A1
CLAIM 15
. The device of claim 9 , wherein said receive mechanism has a sequence of hardware logic (processor usage, I/O access rate, access rates, I/O access rates) units for processing said header .

US20040073703A1
CLAIM 22
. The device of claim 21 , further comprising a central processing unit (processor usage, I/O access rate, access rates, I/O access rates) (CPU) running a protocol processing stack to generate a Transmission Control Block (TCB) corresponding to said packet .

US9635134B2
CLAIM 3
. The method of claim 2 , wherein prioritizing the one or more VMs for consumption of the cloud resources using the LIRS replacement scheme comprises using LIRS based processor usage (central processing unit, hardware logic, transfer data) tracking .
US20040073703A1
CLAIM 1
. A device for use with a first apparatus that is connectable to a second apparatus by a network , the first apparatus containing a memory and a first processor operating a stack of protocol processing layers that create a context for communication between an application of the first apparatus and an application of the second apparatus , the device comprising : a communication processing mechanism connected to the first processor and to the network , said communication processing mechanism containing a second processor and instructions to choose , by referencing the context , whether a network message packet is processed by the protocol processing layers or the context is employed to transfer data (processor usage, I/O access rate, access rates, I/O access rates) contained in said packet between the network and the first apparatus memory .

US20040073703A1
CLAIM 15
. The device of claim 9 , wherein said receive mechanism has a sequence of hardware logic (processor usage, I/O access rate, access rates, I/O access rates) units for processing said header .

US20040073703A1
CLAIM 22
. The device of claim 21 , further comprising a central processing unit (processor usage, I/O access rate, access rates, I/O access rates) (CPU) running a protocol processing stack to generate a Transmission Control Block (TCB) corresponding to said packet .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions (second instruction, first instruction) that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage (central processing unit, hardware logic, transfer data) , memory usage , or I/O access rates (central processing unit, hardware logic, transfer data) for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20040073703A1
CLAIM 1
. A device for use with a first apparatus that is connectable to a second apparatus by a network , the first apparatus containing a memory and a first processor operating a stack of protocol processing layers that create a context for communication between an application of the first apparatus and an application of the second apparatus , the device comprising : a communication processing mechanism connected to the first processor and to the network , said communication processing mechanism containing a second processor and instructions to choose , by referencing the context , whether a network message packet is processed by the protocol processing layers or the context is employed to transfer data (processor usage, I/O access rate, access rates, I/O access rates) contained in said packet between the network and the first apparatus memory .

US20040073703A1
CLAIM 4
. The device of claim 1 , wherein said instructions include a first instruction (therein instructions) to create a header corresponding to said context and having control information corresponding to several of the protocol processing layers , and said instructions include a second instruction (therein instructions) to prepend said header to said data for transmission of said packet from the first apparatus to the second apparatus .

US20040073703A1
CLAIM 15
. The device of claim 9 , wherein said receive mechanism has a sequence of hardware logic (processor usage, I/O access rate, access rates, I/O access rates) units for processing said header .

US20040073703A1
CLAIM 22
. The device of claim 21 , further comprising a central processing unit (processor usage, I/O access rate, access rates, I/O access rates) (CPU) running a protocol processing stack to generate a Transmission Control Block (TCB) corresponding to said packet .

US9635134B2
CLAIM 9
. The machine-readable non-transitory medium of claim 8 , wherein the stored instructions , that when executed by one or more processors , further operatively enable the cloud computing resource manager to use LIRS based processor usage (central processing unit, hardware logic, transfer data) tracking .
US20040073703A1
CLAIM 1
. A device for use with a first apparatus that is connectable to a second apparatus by a network , the first apparatus containing a memory and a first processor operating a stack of protocol processing layers that create a context for communication between an application of the first apparatus and an application of the second apparatus , the device comprising : a communication processing mechanism connected to the first processor and to the network , said communication processing mechanism containing a second processor and instructions to choose , by referencing the context , whether a network message packet is processed by the protocol processing layers or the context is employed to transfer data (processor usage, I/O access rate, access rates, I/O access rates) contained in said packet between the network and the first apparatus memory .

US20040073703A1
CLAIM 15
. The device of claim 9 , wherein said receive mechanism has a sequence of hardware logic (processor usage, I/O access rate, access rates, I/O access rates) units for processing said header .

US20040073703A1
CLAIM 22
. The device of claim 21 , further comprising a central processing unit (processor usage, I/O access rate, access rates, I/O access rates) (CPU) running a protocol processing stack to generate a Transmission Control Block (TCB) corresponding to said packet .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions (second instruction, first instruction) that , when executed by processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (central processing unit, hardware logic, transfer data) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage (central processing unit, hardware logic, transfer data) , memory usage , I/O access rates (central processing unit, hardware logic, transfer data) , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
US20040073703A1
CLAIM 1
. A device for use with a first apparatus that is connectable to a second apparatus by a network , the first apparatus containing a memory and a first processor operating a stack of protocol processing layers that create a context for communication between an application of the first apparatus and an application of the second apparatus , the device comprising : a communication processing mechanism connected to the first processor and to the network , said communication processing mechanism containing a second processor and instructions to choose , by referencing the context , whether a network message packet is processed by the protocol processing layers or the context is employed to transfer data (processor usage, I/O access rate, access rates, I/O access rates) contained in said packet between the network and the first apparatus memory .

US20040073703A1
CLAIM 4
. The device of claim 1 , wherein said instructions include a first instruction (therein instructions) to create a header corresponding to said context and having control information corresponding to several of the protocol processing layers , and said instructions include a second instruction (therein instructions) to prepend said header to said data for transmission of said packet from the first apparatus to the second apparatus .

US20040073703A1
CLAIM 15
. The device of claim 9 , wherein said receive mechanism has a sequence of hardware logic (processor usage, I/O access rate, access rates, I/O access rates) units for processing said header .

US20040073703A1
CLAIM 22
. The device of claim 21 , further comprising a central processing unit (processor usage, I/O access rate, access rates, I/O access rates) (CPU) running a protocol processing stack to generate a Transmission Control Block (TCB) corresponding to said packet .

US9635134B2
CLAIM 15
. The system of claim 14 , wherein the stored instructions , that when executed by the processor , further operatively enable the cloud computing resource manager to use LIRS based processor usage (central processing unit, hardware logic, transfer data) tracking .
US20040073703A1
CLAIM 1
. A device for use with a first apparatus that is connectable to a second apparatus by a network , the first apparatus containing a memory and a first processor operating a stack of protocol processing layers that create a context for communication between an application of the first apparatus and an application of the second apparatus , the device comprising : a communication processing mechanism connected to the first processor and to the network , said communication processing mechanism containing a second processor and instructions to choose , by referencing the context , whether a network message packet is processed by the protocol processing layers or the context is employed to transfer data (processor usage, I/O access rate, access rates, I/O access rates) contained in said packet between the network and the first apparatus memory .

US20040073703A1
CLAIM 15
. The device of claim 9 , wherein said receive mechanism has a sequence of hardware logic (processor usage, I/O access rate, access rates, I/O access rates) units for processing said header .

US20040073703A1
CLAIM 22
. The device of claim 21 , further comprising a central processing unit (processor usage, I/O access rate, access rates, I/O access rates) (CPU) running a protocol processing stack to generate a Transmission Control Block (TCB) corresponding to said packet .




US9635134B2

Filed: 2012-07-03     Issued: 2017-04-25

Resource management in a cloud computing environment

(Original Assignee) Empire Technology Development LLC     (Current Assignee) BEIJING ENDLESS TIME AND SPACE TECHNOLOGY Co Ltd ; INVINCIBLE IP LLC

Junwei Cao, Yuxin Wan
US7237036B2

Filed: 2002-09-27     Issued: 2007-06-26

Fast-path apparatus for receiving data corresponding a TCP connection

(Original Assignee) Alacritech Inc     (Current Assignee) Alacritech Inc

Laurence B. Boucher, Stephen E. J. Blightman, Peter K. Craft, David A. Higgen, Clive M. Philbrick, Daryl D. Starr
US9635134B2
CLAIM 1
. A method to manage resources in a cloud computing environment , comprising : determining a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or input/output (I/O) access rates (hardware logic, transfer data) for the one or more virtual machines in the cloud computing environment ;

prioritizing the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determining whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritizing the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrating the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
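Charting note (illustration only): the monitoring and first-scheme prioritization steps of claim 1 can be pictured as sampling per-VM counters and ranking the VMs by a weighted consumption score. The sketch below assumes hypothetical weights and a hypothetical sampling interface (WEIGHTS, prioritize_first_scheme); nothing in the claim fixes these choices.

```python
# Sketch of the monitoring and first-scheme prioritization steps: per-VM
# consumption rates are derived from sampled processor, memory, and I/O
# counters, then VMs are ranked by a weighted score. Weights and the sampling
# interface are assumptions, not the patent's.

WEIGHTS = {"cpu": 0.5, "mem": 0.3, "io": 0.2}   # assumed first-scheme weights


def consumption_rate(prev, curr, interval_s):
    """Rate of change of each monitored counter over the sampling interval."""
    return {k: (curr[k] - prev[k]) / interval_s for k in WEIGHTS}


def prioritize_first_scheme(samples, interval_s=1.0):
    """samples: {vm_name: (previous_counters, current_counters)}.
    Returns VM names ordered from highest to lowest consumption."""
    scores = {}
    for vm, (prev, curr) in samples.items():
        rates = consumption_rate(prev, curr, interval_s)
        scores[vm] = sum(WEIGHTS[k] * rates[k] for k in WEIGHTS)
    return sorted(scores, key=scores.get, reverse=True)


if __name__ == "__main__":
    samples = {
        "vm-a": ({"cpu": 10, "mem": 100, "io": 5}, {"cpu": 90, "mem": 120, "io": 6}),
        "vm-b": ({"cpu": 10, "mem": 100, "io": 5}, {"cpu": 20, "mem": 400, "io": 50}),
    }
    print(prioritize_first_scheme(samples))   # ['vm-b', 'vm-a']
```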
US7237036B2
CLAIM 1
. A device for use with a first apparatus that is connectable to a second apparatus , the first apparatus containing a memory and a first processor operating a stack of protocol processing layers that create a context for communication , the context including a media access control (MAC) layer address , an Internet Protocol (IP) address and Transmission Control Protocol (TCP) state information , the device comprising : a communication processing mechanism connected to the first processor , said communication processing mechanism containing a second processor running instructions to process a message packet such that the context is employed to transfer data (I/O access rate, access rates, I/O access rates) contained in said packet to the first apparatus memory and the TCP state information is updated by said second processor .

US7237036B2
CLAIM 15
. The device of claim 9 , wherein said receive mechanism has a sequence of hardware logic (I/O access rate, access rates, I/O access rates) units for processing said header .

US9635134B2
CLAIM 7
. A machine-readable non-transitory medium having stored therein instructions (second instruction, first instruction) that , when executed by one or more processors , operatively enable a cloud computing resource manager to : determine a consumption rate of cloud resources by one or more virtual machines (VMs) , the determining based on monitoring at least one of processor usage , memory usage , or I/O access rates (hardware logic, transfer data) for the one or more virtual machines in a cloud computing environment ;

prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate ;

determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate includes a change in the at least one of processor usage , memory usage , input/output (I/O) access rates , or a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
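Charting note (illustration only): the "exceeds a predetermined threshold" limitation of claim 7 compares the change in each monitored quantity against a threshold. The sketch below uses placeholder per-metric thresholds (THRESHOLDS) and a hypothetical exceeds_threshold helper; the claim does not specify threshold values or a per-metric structure.

```python
# Sketch of the threshold test: the change in each monitored quantity
# (processor usage, memory usage, I/O access rate, or change-region size)
# is compared against a per-metric predetermined threshold. The values
# below are placeholders, not values taken from the patent.

THRESHOLDS = {              # assumed predetermined thresholds, one per metric
    "cpu": 0.25,            # fraction of a core
    "mem": 256.0,           # MB
    "io": 100.0,            # operations per second
    "change_region": 0.10,  # fraction of the display reported as changed
}


def exceeds_threshold(previous, current):
    """previous/current: {metric: value} for one VM at successive samples.
    Returns the metrics whose change exceeds the predetermined threshold."""
    return [
        metric
        for metric, limit in THRESHOLDS.items()
        if abs(current.get(metric, 0.0) - previous.get(metric, 0.0)) > limit
    ]


print(exceeds_threshold(
    {"cpu": 0.4, "mem": 900.0, "io": 40.0, "change_region": 0.02},
    {"cpu": 0.9, "mem": 950.0, "io": 60.0, "change_region": 0.30},
))   # -> ['cpu', 'change_region']
```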
US7237036B2
CLAIM 1
. A device for use with a first apparatus that is connectable to a second apparatus , the first apparatus containing a memory and a first processor operating a stack of protocol processing layers that create a context for communication , the context including a media access control (MAC) layer address , an Internet Protocol (IP) address and Transmission Control Protocol (TCP) state information , the device comprising : a communication processing mechanism connected to the first processor , said communication processing mechanism containing a second processor running instructions to process a message packet such that the context is employed to transfer data (I/O access rate, access rates, I/O access rates) contained in said packet to the first apparatus memory and the TCP state information is updated by said second processor .

US7237036B2
CLAIM 4
. The device of claim 1 , wherein said instructions include a first instruction (therein instructions) to create a header corresponding to said context and having control information corresponding to several of the protocol processing layers , and said instructions include a second instruction (therein instructions) to prepend said header to second data for transmission of a second packet .

US7237036B2
CLAIM 15
. The device of claim 9 , wherein said receive mechanism has a sequence of hardware logic (I/O access rate, access rates, I/O access rates) units for processing said header .

US9635134B2
CLAIM 13
. A system to manage migration of resources in a cloud computing environment , comprising : a processor ;

a cloud computing resource manager communicatively coupled to the processor ;

and a machine-readable non-transitory medium having stored therein instructions (second instruction, first instruction) that , when executed by the processor , operatively enable the cloud computing resource manager to : monitor a consumption rate of cloud resources , by one or more virtual machines (VMs) , the consumption rate for the one or more virtual machines including at least one of a CPU consumption rate , a memory consumption rate , an I/O access rate (hardware logic, transfer data) , or a change region size determined according to changed regions of a graphical display generated by the one or more VMs , prioritize the one or more VMs for consumption of the cloud resources using a first resource management scheme based , at least in part , on the determined consumption rate , determine whether a change in the consumption rate of the cloud resources exceeds a predetermined threshold , the change in the consumption rate including a change in the at least one of processor usage , memory usage , I/O access rates (hardware logic, transfer data) , and a change region size based on changed regions of a graphical display generated by the one or more VMs ;

prioritize the one or more VMs for consumption of the cloud resources using a second resource management scheme based , at least in part , on a maximum capacity for utilization of allowed cloud resources for the cloud computing environment and whether the determined change in the consumption rate of the cloud resources exceeds the predetermined threshold ;

and migrate the consumption of the cloud resources to alternate cloud resources located outside of the cloud computing environment for at least one of the one or more VMs based , at least in part , on the one or more VMs prioritized for consumption of the cloud resources using the second resource management scheme .
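Charting note (illustration only): claim 13 monitors a change region size "determined according to changed regions of a graphical display." One hypothetical way to compute such a quantity is to tile successive frames and report the fraction of tiles that differ, as sketched below; the tile size (TILE) and the frame representation are assumptions.

```python
# Sketch of deriving a change-region size from changed regions of a VM's
# graphical display: two frames are tiled into fixed-size blocks and the
# fraction of blocks that differ is reported.

TILE = 8   # assumed tile edge length in pixels


def change_region_size(prev_frame, curr_frame):
    """prev_frame/curr_frame: equally sized 2-D lists of pixel values.
    Returns the fraction of TILE x TILE regions whose pixels changed."""
    height, width = len(prev_frame), len(prev_frame[0])
    changed = total = 0
    for y in range(0, height, TILE):
        for x in range(0, width, TILE):
            total += 1
            changed += any(
                prev_frame[yy][xx] != curr_frame[yy][xx]
                for yy in range(y, min(y + TILE, height))
                for xx in range(x, min(x + TILE, width))
            )
    return changed / total


frame_a = [[0] * 32 for _ in range(16)]
frame_b = [row[:] for row in frame_a]
frame_b[3][5] = 255                           # one pixel changes -> one changed tile
print(change_region_size(frame_a, frame_b))   # 1 of 8 tiles -> 0.125
```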
US7237036B2
CLAIM 1
. A device for use with a first apparatus that is connectable to a second apparatus , the first apparatus containing a memory and a first processor operating a stack of protocol processing layers that create a context for communication , the context including a media access control (MAC) layer address , an Internet Protocol (IP) address and Transmission Control Protocol (TCP) state information , the device comprising : a communication processing mechanism connected to the first processor , said communication processing mechanism containing a second processor running instructions to process a message packet such that the context is employed to transfer data (I/O access rate, access rates, I/O access rates) contained in said packet to the first apparatus memory and the TCP state information is updated by said second processor .

US7237036B2
CLAIM 4
. The device of claim 1 , wherein said instructions include a first instruction (therein instructions) to create a header corresponding to said context and having control information corresponding to several of the protocol processing layers , and said instructions include a second instruction (therein instructions) to prepend said header to second data for transmission of a second packet .

US7237036B2
CLAIM 15
. The device of claim 9 , wherein said receive mechanism has a sequence of hardware logic (I/O access rate, access rates, I/O access rates) units for processing said header .