Purpose: Invalidity Analysis


Patent: US8954993B2
Filed: 2013-02-28
Issued: 2015-02-10
Patent Holder: (Original Assignee) Empire Technology Development LLC; (Current Assignee) Invincible IP LLC; Ardent Research Corp
Inventor(s): Ezekiel Kruglick

Title: Local message queue processing for co-located workers

Abstract: Technologies are provided for locally processing queue requests from co-located workers. In some examples, information about the usage of remote datacenter queues by co-located workers may be used to determine one or more matched queues. Messages from local workers to a remote datacenter queue classified as a matched queue may be stored locally. Subsequently, local workers that request messages from matched queues may be provided with the locally-stored messages.
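The mechanism the abstract describes can be sketched in a few lines of code. The sketch below is purely illustrative and not taken from the patent: all names (LocalQueueProxy, send, receive, the callback parameters) are assumptions. It shows one plausible reading of the claimed technique, in which a local proxy observes which remote datacenter queues are used by co-located producer and consumer workers, marks queues used by both as "matched", and thereafter keeps those messages in local storage instead of round-tripping through the remote queue.

```python
# Hypothetical sketch of local queue processing for co-located workers.
# All identifiers are illustrative assumptions, not the patent's terms of art.
from collections import defaultdict, deque


class LocalQueueProxy:
    def __init__(self, remote_send, remote_receive):
        self._remote_send = remote_send          # callback to the real datacenter queue
        self._remote_receive = remote_receive    # callback to pull from the datacenter queue
        self._producers = defaultdict(set)       # queue name -> local producer worker ids
        self._consumers = defaultdict(set)       # queue name -> local consumer worker ids
        self._local_store = defaultdict(deque)   # locally stored messages per matched queue

    def _is_matched(self, queue):
        # A queue is "matched" when co-located workers both produce to
        # and consume from it.
        return bool(self._producers[queue] and self._consumers[queue])

    def send(self, worker_id, queue, message):
        self._producers[queue].add(worker_id)
        if self._is_matched(queue):
            self._local_store[queue].append(message)   # keep the message local
        else:
            self._remote_send(queue, message)          # forward to the remote queue

    def receive(self, worker_id, queue):
        self._consumers[queue].add(worker_id)
        if self._is_matched(queue) and self._local_store[queue]:
            return self._local_store[queue].popleft()  # serve from local storage
        return self._remote_receive(queue)             # fall back to the remote queue
```

Under this reading, the first message to a queue with no co-located consumer still travels to the remote datacenter queue; once a local consumer is observed, subsequent messages stay local, which is the latency saving the abstract points to.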




Disclaimer: The promise of Apex Standards Pseudo Claim Charting (PCC) is not to replace expert opinion but to provide due diligence and transparency prior to high-precision charting. PCC conducts aggressive mapping (based on broadest reasonable, ordinary, or customary interpretation, and multilingual translation) between a target patent's claim elements and other documents (potential technical standard specifications or prior art in the same or different jurisdictions), thereby allowing a top-down, a priori evaluation with which stakeholders can assess standard essentiality (potential strength) or invalidity (potential weakness) quickly and effectively before making complex, high-value decisions. PCC is designed to relieve the initial burden of proof via an exhaustive listing of contextual semantic mappings as potential building blocks toward a litigation-ready work product. Stakeholders may then modify shortlisted PCC mappings or identify other relevant materials to formulate strategy and achieve further purposes.



  Independent Claim

Ground | Reference | Owner of the Reference | Title | Semantic Mapping (claim term ↔ reference term) | Basis | Anticipation | Challenged Claims (1–23)
1

Proceedings of the 2006 USENIX Annual Technical Conference, pp. 29–42, 2006 (USENIX Association)

(Liu, 2006)
International Business Machines Corporation
High Performance VMM-Bypass I/O in Virtual Machines
second datacenter, datacenter queue ↔ virtual machine monitor

datacenter queue request ↔ I/O operation

XXXXXXXXXXXXXX
2

US20130014114A1

(Akihito Nagata, 2013)
(Original Assignee) Sony Interactive Entertainment Inc     

(Current Assignee)
Sony Interactive Entertainment Inc
Information processing apparatus and method for carrying out multi-thread processing
producer worker, consumer worker ↔ storage location, one processor

queue cache ↔ write access

datacenter queue, datacenter queue request ↔ push module

35 U.S.C. 103(a)

35 U.S.C. 102(e)
describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches assigning the thread priority to the available thread based on a priority of the task distributed to the…

teaches a method of delaying the execution of thread groups…
XXXXXXXXXXXXXXXXXXXX
3

US20130044749A1

(Mark Eisner, 2013)
(Original Assignee) FireStar Software Inc     

(Current Assignee)
FireStar Software Inc
System and method for exchanging information among exchange applications
consumer worker ↔ including information

first message ↔ different gateways

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses wherein a message contains metadata and is processed…

discloses determining from the request a specified template paragraphs…

discloses that patient data may be converted from a proprietary format to a common format and from a common format to a…

discloses a method of asynchronously communicating with a web application comprising receiving one or more messages from…
XXXXXXXXXXXXXXXXX
4

CN102668516A

(邓金波, 2012)
(Original Assignee) Huawei Technologies Co Ltd     

(Current Assignee)
Huawei Technologies Co Ltd
Method and apparatus for implementing message delivery in a cloud message service (一种云消息服务中实现消息传递的方法和装置)
first message ↔ 实现消息 (implementing messages)

datacenter controller, first datacenter location ↔ 数据库 (database)

queue requests ↔ 包含的 (contained in)

35 U.S.C. 103(a)

35 U.S.C. 102(b)
discloses disclose the priority scheme includes an indication received from the assistant that particular ones of the…

teaches the additional features wherein said access manager is responsive to said sourcedestination policy specified…

describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

discloses that a rule can be used as a template for other rules in order to create a new but similar rule column…
XXXXXXXX
5

US20120066177A1

(Scott Swanburg, 2012)
(Original Assignee) AT&T Mobility II LLC     

(Current Assignee)
AT&T Mobility II LLC
Systems and Methods for Remote Deletion of Contact Information
second datacenter, datacenter queue ↔ desktop computer, laptop computer

message request ↔ message request

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches the status report transmitted from the mobile unit to the user interface unit according to one of SMTP POP…

teaches the memory storing the status report for a predefined length of time after the status report is transmitted to…

teaches a receiver for receiving positioning data from satellites allowing the processor to use the positioning data…

teaches a communication system comprising A mobile unit having a processor a memory and a wireless modem for…
XXXXXXXXXXXXXX
6

US20130036427A1

(Han Chen, 2013)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
Message queuing with flexible consistency options
datacenter controller ↔ readable program

first message, first criterion ↔ time t

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXXXXXXXXX
7

US20120117167A1

(Aran Sadja, 2012)
(Original Assignee) Sony Corp     

(Current Assignee)
Sony Corp
System and method for providing recommendations to a user in a viewing social network
datacenter controller ↔ readable program, more processor

second VM ↔ specific media

datacenter queue ↔ more servers

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses a machine configured to of transmitting at least one indicator of an encapsulation of at least one skill…

teaches determining a communications strength between each of the multiple identities associated with the user and…

discloses in one embodiment that a fan of a cricket watching a test match broadcast free to air could anticipate a…

discloses sending of external program information such as background information for certain programs including video…
XXXXXXXXXXXXXX
8

US20120117144A1

(Ludovic Douillet, 2012)
(Original Assignee) Sony Corp     

(Current Assignee)
Sony Corp
System and method for creating a viewing social network
datacenter controller ↔ readable program

second VM ↔ user selection

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses a machine configured to of transmitting at least one indicator of an encapsulation of at least one skill…

teaches determining a communications strength between each of the multiple identities associated with the user and…

discloses in one embodiment that a fan of a cricket watching a test match broadcast free to air could anticipate a…

discloses sending of external program information such as background information for certain programs including video…
X
9

US20110208796A1

(John Reed Riley, 2011)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Technology Licensing LLC
Using distributed queues in an overlay network
datacenter controller ↔ more processor

queue cache, queue cache includes one ↔ system memory

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches the invention substantially as cited above they do not explicitly teach transactions are committed as a group…

teaches replication techniques and addresses the problem of how to select the node that a replica should visit a…

discloses wherein the optimization comprises maintaining minmax values of unique columns in the column partitioned store…

teaches the timer master schedules jobs in addition to the EJB timer jobs paragraph…
XXX
10

US20100185665A1

(Monroe Horn, 2010)
(Original Assignee) SUNSTEIN KANN MURPHY AND TIMBERS LLP     

(Current Assignee)
SUNSTEIN KANN MURPHY AND TIMBERS LLP
Office-Based Notification Messaging System
producer worker ↔ message recipients

datacenter controller ↔ readable program

second datacenter location ↔ time interval

first message, first criterion ↔ time t

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
teaches the invention substantially as claimed including a method system and article for processing solicited…

teaches it detects if the message C incoming email message is a command message then the user acts upon the command or…

discloses at least one trust category comprising a suspicious message category see…

discloses the claimed subject matter as discussed above in claim…
XXXXXXXXXXXXXXXXX
11

US20110138400A1

(Allan T. Chandler, 2011)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
Automated merger of logically associated messages in a message queue
second VMs ↔ host computing platform

producer worker ↔ one processor

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXXXXXX
12

US20100010671A1

(Atsushi Miyamoto, 2010)
(Original Assignee) Sony Corp     

(Current Assignee)
Sony Corp
Information processing system, information processing method, robot control system, robot control method, and computer program
message request ↔ reception information

producer worker ↔ respective processes

datacenter queue ↔ different computer

first message ↔ exchange messages

datacenter controller ↔ readable program

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches the method and system are implemented on a portable electronic device col…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…

describes workflow initiation according to an organizational relationship between the usersbusiness objects see claim…
XXXXXXXXXXXXXXXXXXXXXX
13

EP2449849A1

(Harsh Jahagirdar, 2012)
(Original Assignee) Nokia Oyj     

(Current Assignee)
Nokia Oyj
Resource allocation
datacenter queue request ↔ media resource

producer worker ↔ one processor

35 U.S.C. 103(a)

35 U.S.C. 102(e)
describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches a system and method for exchanging information among exchange applications…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXXXXXXXXX
14

US20100325190A1

(John Reed Riley, 2010)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Technology Licensing LLC
Using distributed queues in an overlay network
datacenter controller ↔ more processor

queue cache, queue cache includes one ↔ system memory

first message, first criterion ↔ time t

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches the invention substantially as cited above they do not explicitly teach transactions are committed as a group…

teaches replication techniques and addresses the problem of how to select the node that a replica should visit a…

discloses wherein the optimization comprises maintaining minmax values of unique columns in the column partitioned store…

teaches the timer master schedules jobs in addition to the EJB timer jobs paragraph…
XXXXXXXXXXXXXX
15

US20100325219A1

(Clemens F. Vasters, 2010)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Technology Licensing LLC
Adding configurable messaging functionality to an infrastructure
second datacenter location ↔ external hardware

datacenter controller ↔ more processor

queue cache, queue cache includes one ↔ system memory

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches using priority queuing to decide when to dispatch a message to an…

describes workflow initiation according to an organizational relationship between the usersbusiness objects see claim…

discloses determining the request is one of a production request or a priority request…
XXX
16

US20090228564A1

(Keith Martin Hamburg, 2009)
(Original Assignee) AOL Inc     

(Current Assignee)
Verizon Media Inc
Electronic mail forwarding service
first message ↔ email document

second VM ↔ user selection

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches the approval of the book is based on the votes received online through a wide area network connection from at…

teaches a computerized method for creating a story by multiple collaborators being online users supplying content…

discloses cascading said rst sub lter and at least one remainder sub lter to create at least part of said ensemble lter…

teaches receiving a request data for a review from the first user via the first client…
XXXXXXX
17

US20100161753A1

(Gerhard Dietrich Klassen, 2010)
(Original Assignee) Research in Motion Ltd     

(Current Assignee)
BlackBerry Ltd
Method and communication device for processing data for transmission from the communication device to a second communication device
first criterion ↔ communication network

message request ↔ instant messaging

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches a system for transmitting data as claimed in claim…

teaches the memory storing the status report for a predefined length of time after the status report is transmitted to…

teaches a detection component coupled to the processor wherein the detection component comprises a sensor for…

teaches the status report transmitted from the mobile unit to the user interface unit according to one of SMTP POP…
XXXXXXXXX
18

JP2010020650A

(Atsushi Miyamoto, 2010)
(Original Assignee) Sony Corp; ソニー株式会社
Information processing system and information processing method, robot control system and control method, and computer program (情報処理システム及び情報処理方法、ロボットの制御システム及び制御方法、並びコンピュータ・プログラム)
queue usage detector module, processing module ↔ 受信モジュール, 送信モジュール (reception module, transmission module)

first message ↔ 送信メッセージ, 受信メッセージ (transmitted message, received message)

datacenter queue request ↔ する手段 (means for)

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches the method and system are implemented on a portable electronic device col…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…

describes workflow initiation according to an organizational relationship between the usersbusiness objects see claim…
XXXXXXXXXXXX
19

US20090249357A1

(Anupam Chanda, 2009)
(Original Assignee) VMware Inc     

(Current Assignee)
VMware Inc
Systems and methods for inter process communication based on queues
second VM, second VMs ↔ virtual machines

first datacenter ↔ readable media

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches providing each of said operating systems with access to second input andor output devices of said computer to…

describes workflow initiation according to an organizational relationship between the usersbusiness objects see claim…

teaches a system and method for exchanging information among exchange applications…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXX
20

US20080276241A1

(Ratan Bajpai, 2008)
(Original Assignee) Avaya Inc     

(Current Assignee)
Avaya Inc
Distributed priority queue that maintains item locality
second server ↔ telephone calls

first message, first criterion ↔ time t

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXXXXXXXXX
21

US20090241118A1

(Krishna K. Lingamneni, 2009)
(Original Assignee) American Express Travel Related Services Co Inc     

(Current Assignee)
Liberty Peak Ventures LLC
System and method for processing interface requests in batch
queue requests ↔ requesting application

processing module ↔ general purpose

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXX
22

US20080235690A1

(Boon Seong Ang, 2008)
(Original Assignee) VMware Inc     

(Current Assignee)
VMware Inc
Maintaining Processing Order While Permitting Parallelism
second datacenter, datacenter queue ↔ virtual machine monitor

second VM, second VMs ↔ virtual machines

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

discloses dequeuing by the server read and write requests from the client computing device…

teaches the database system is an in memory database system database server…

describes workflow initiation according to an organizational relationship between the usersbusiness objects see claim…
XXXXXXXXXXXXXXX
23

US20090234908A1

(Marc D. Reyhner, 2009)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Technology Licensing LLC
Data transmission queuing using fault prediction
second datacenter, second datacenter location ↔ queue management

second server ↔ remote computer

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXX
24

EP1939743A2

(Franz Weber, 2008)
(Original Assignee) SAP SE     

(Current Assignee)
SAP SE
Event correlation
message request ↔ incoming messages

second VM ↔ first event

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXXXXX
25

US20080077939A1

(Richard Michael Harran, 2008)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
Solution for modifying a queue manager to support smart aliasing which permits extensible software to execute against queued data without application modifications
second VM ↔ application execution

second server ↔ given operation

second datacenter ↔ one computer

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXX
26

US20070239838A1

(James Laurel, 2007)
(Original Assignee) Nokia Oyj; Twango Inc     

(Current Assignee)
Nokia Technologies Oy
Methods and systems for digital content sharing
message request, datacenter queue request ↔ second email, first email

datacenter queue ↔ more servers

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches the memory storing the status report for a predefined length of time after the status report is transmitted to…

teaches a receiver for receiving positioning data from satellites allowing the processor to use the positioning data…

teaches it detects if the message C incoming email message is a command message then the user acts upon the command or…

teaches a communication system comprising A mobile unit having a processor a memory and a wireless modem for…
XXXXXXXXXXXXXX
27

US20080212602A1

(Alphana B. Hobbs, 2008)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
Method, system and program product for optimizing communication and processing functions between disparate applications
first server ↔ second request

first criterion ↔ second program, first program

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches the smart video display unit performs a configuration check in conjunction with a configuration identification…

discloses using last come first serve logic with a MAC layer…

describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

teaches wherein the additional software comprises software for continuously monitoring interfaces and internal…
XXXXXXXXX
28

US20080148281A1

(William R. Magro, 2008)
(Original Assignee) Intel Corp     

(Current Assignee)
Intel Corp
RDMA (remote direct memory access) data transfer in a virtual environment
second datacenter, datacenter queue ↔ virtual machine monitor

second server, second VM ↔ second virtual machine

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches the claimed limitations wherein providing a tenant with user access to the generated data collection…

discloses the claimed computer program product and apparatus for reconciling billing measures to cost factors the…

teaches a routing application stored on and executing from a memory media of the routing engine…

teaches a method of managing memory of a database management system database server applications…
XXXXXXXXXXXXXX
29

US20070165625A1

(Mark Eisner, 2007)
(Original Assignee) FireStar Software Inc     

(Current Assignee)
FireStar Software Inc
System and method for exchanging information among exchange applications
queue cache includes one ↔ unique message identifier

queue cache ↔ read access

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses wherein a message contains metadata and is processed…

discloses determining from the request a specified template paragraphs…

discloses that patient data may be converted from a proprietary format to a common format and from a common format to a…

discloses a method of asynchronously communicating with a web application comprising receiving one or more messages from…
XXX
30

US20070168301A1

(Mark Eisner, 2007)
(Original Assignee) FireStar Software Inc     

(Current Assignee)
FireStar Software Inc
System and method for exchanging information among exchange applications
consumer worker ↔ including information

first message ↔ different gateways

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses wherein a message contains metadata and is processed…

discloses determining from the request a specified template paragraphs…

discloses that patient data may be converted from a proprietary format to a common format and from a common format to a…

discloses a method of asynchronously communicating with a web application comprising receiving one or more messages from…
XXXXXXXXXXXXXXXXX
31

US20070204275A1

(Melanie Alshab, 2007)
(Original Assignee) Rhysome Inc     

(Current Assignee)
Rhysome Inc
Method and system for reliable message delivery
first message ↔ transmitting step

second datacenter ↔ one computer

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches a routing application stored on and executing from a memory media of the routing engine…

teaches a system and method for exchanging information among exchange applications…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXXX
32

US20070288931A1

(Gokhan Avkarogullari, 2007)
(Original Assignee) PortalPlayer Inc     

(Current Assignee)
Nvidia Corp
Multi processor and multi thread safe message queue with hardware assistance
queue requests ↔ exchanging messages

second datacenter, second datacenter location ↔ queue management

queue usage detector module ↔ turning control

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches sharing objects with programs developed in different languages including C C and…

describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

teaches wherein said security module selectively purges all of the data in said shared memory APA pages…

teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…
XXXX
33

US20070174398A1

(Frank Addante, 2007)
(Original Assignee) StrongMail Systems Inc     

(Current Assignee)
Selligent Inc
Systems and methods for communicating logic in e-mail messages
processing module ↔ processing module

message request ↔ web service

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches the status report transmitted from the mobile unit to the user interface unit according to one of SMTP POP…

teaches a receiver for receiving positioning data from satellites allowing the processor to use the positioning data…

teaches the memory storing the status report for a predefined length of time after the status report is transmitted to…

teaches a communication system comprising A mobile unit having a processor a memory and a wireless modem for…
XXXXXXXXXX
34

US20060146991A1

(J. Thompson, 2006)
(Original Assignee) Tervela Inc     

(Current Assignee)
Tervela Inc
Provisioning and management in a message publish/subscribe system
first server ↔ external authentication

producer worker ↔ data message

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
discloses one or more interfaces to one or more communications channels that may include one or more interfaces to user…

discloses a publicationsubscriber environment in which messages flow from a message broker…

discloses that a message broker can receive raw stock trade information such as price and volume from the NYSE and…

discloses A client session s time stamp is updated each time a message transaction containing the session id for the…
XXXXXXXXX
35

US20070156834A1

(Radoslav Nikolov, 2007)
(Original Assignee) SAP SE     

(Current Assignee)
SAP SE
Cursor component for messaging service
command channel ↔ acknowledging receipt

first message ↔ first message

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches a routing application stored on and executing from a memory media of the routing engine…

teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

describes workflow initiation according to an organizational relationship between the usersbusiness objects see claim…

describes a plurality of graphics primitives of the first display frame…
XXXXXXX
36

US20060168070A1

(J. Thompson, 2006)
(Original Assignee) Tervela Inc     

(Current Assignee)
Tervela Inc
Hardware-based messaging appliance
second VM ↔ configuration parameters

message request ↔ incoming messages

processing module ↔ processing module

producer worker ↔ data message

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
discloses one or more interfaces to one or more communications channels that may include one or more interfaces to user…

discloses a publicationsubscriber environment in which messages flow from a message broker…

discloses that a message broker can receive raw stock trade information such as price and volume from the NYSE and…

discloses A client session s time stamp is updated each time a message transaction containing the session id for the…
XXXXXXXXXXXXXXXX
37

US20070094664A1

(Kimming So, 2007)
(Original Assignee) Broadcom Corp     

(Current Assignee)
Avago Technologies General IP Singapore Pte Ltd
Programmable priority for concurrent multi-threaded processors
first server ↔ second request

queue cache ↔ cache line

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches data processing elements having vector registers vector units…

discloses the claimed invention except for where said accessing results in a cache miss wherein said method further…

teaches wherein the processor device is adapted for the sequential processing unit to be blocked from accessing some…

teaches using application specific multimedia DSP and other kinds of coprocessors it does not teach the data…
XXXXX
38

US20060031568A1

(Vadim Eydelman, 2006)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Technology Licensing LLC
Adaptive flow control protocol
second VM ↔ specified time limit

datacenter queue request ↔ buffering data

second datacenter location ↔ data blocks

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
discloses transfer operation to a remote memory in a remote system with other memory buffers in the local system see…

discloses link adaptation is a dynamic selection of modulation and coding schemes based on radio link quality column…

discloses a data transfer between two applications or devices…

teaches when said counter is equal to at least a predetermined value and decrementing said counter by said byte size…
XXXXX
39

US20070168567A1

(William Boyd, 2007)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
System and method for file based I/O directly between an application instance and an I/O adapter
producer worker, consumer worker ↔ storage location

queue cache, queue cache includes one ↔ system memory, I/O request

message request ↔ start address

datacenter queue request ↔ I/O operation

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches sharing objects with programs developed in different languages including C C and…

describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

teaches wherein said security module selectively purges all of the data in said shared memory APA pages…

teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…
XXXXXXXXXXXXXXXXXXXX
40

US20070005572A1

(Travis Schluessler, 2007)
(Original Assignee) Intel Corp     

(Current Assignee)
Intel Corp
Architecture and system for host management
message request message request

first message first message

command channel second buffer

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
teaches that sensitive data such as patient records are securely transferred between a programmer and a data…

discloses an electronic health care compliance assistance comprising a timer for tracking total time and patient…

teaches a GUI for display within a touch screen display of a handheld device wherein the handheld device is configured…

teaches a medical retrieval method that incorporates the use of codes to identify relevant medical data col…
41

US20060230209A1

(Thomas Gregg, 2006)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
Event queue structure and method
datacenter queue request event handler

second VM first event

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

discloses wherein the triggered QP and the triggeror QP belong to a common logical partition…

teaches a system and method for exchanging information among exchange applications…

teaches the database system is an in memory database system database server…
42

US20060184948A1

(Alan Cox, 2006)
(Original Assignee) Red Hat Inc     

(Current Assignee)
Red Hat Inc
System, method and medium for providing asynchronous input and output with less system calls to and from an operating system
first VM operating system kernel

producer worker one processor

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
43

US20050125804A1

(Richard Dievendorff, 2005)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Technology Licensing LLC
Queued component interface passing for results outflow from queued method invocations
first message first message

second datacenter one computer

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses JVMs which is a program loaded onto processing device emulate a particular machine or processing device see…

teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…
44

US20050071316A1

(Ilan Caron, 2005)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Technology Licensing LLC
Method and apparatus for creating, sending, and using self-descriptive objects as messages over a message queuing network
first datacenter readable media

consumer worker one location

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
45

US7257811B2

(Jennifer A. Hunt, 2007)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
System, method and program to migrate a virtual machine
second VM, second VMs virtual machines

first criterion second program, first program

first datacenter readable media

46

US20050044151A1

(Jianguo Jiang, 2005)
(Original Assignee) Messagesoft Inc     

(Current Assignee)
Messagesoft Inc
Asynchronous mechanism and message pool
command channel acknowledging receipt

first message first message

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses the first classification is based upon the destination port associated with the packet column…

teaches determining the threshold value based on the latency of the host computer and the network…

teaches buffers that comprise a transmit FIFO and a receive FIFO…

teaches wherein each copy packet is given a priority order and there is provided means for controlling output of the…
47

CN1508682A

(A. Kundu, 2004)
(Original Assignee) International Business Machines Corp
Method, system and apparatus for task scheduling
first datacenter, datacenter queue a queue

message request these requests

second datacenter, second datacenter location within the system

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…

describes workflow initiation according to an organizational relationship between the users/business objects see claim…
48

JP2004199678A

(Ashish Kundu, 2004)
(Original Assignee) Internatl Business Mach Corp <IBM>; International Business Machines Corporation
Method, system, and program product for task scheduling
queue requests requests

datacenter queue request QoS

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…

describes workflow initiation according to an organizational relationship between the users/business objects see claim…
49

US7337214B2

(Michael Douglass, 2008)
(Original Assignee) YHC Corp     

(Current Assignee)
YHC Corp
Caching, clustering and aggregating server
first criterion communication network

datacenter queue request storage spaces

second server second server

consumer worker storage units

50

EP1474746A1

(Thomas E. Hamilton, 2004)
(Original Assignee) Proquent Systems Corp     

(Current Assignee)
Proquent Systems Corp
Management of message queues
first message first message

second datacenter one computer

51

US20040205770A1

(Kai Zhang, 2004)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
Duplicate message elimination system for a message broker
queue cache includes one unique message identifier

queue requests other time

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…

describes workflow initiation according to an organizational relationship between the users/business objects see claim…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…
52

US20040117794A1

(Ashish Kundu, 2004)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
Method, system and framework for task scheduling
first message exchanging information

datacenter queue request scheduling requests

first server load balancing

first criterion second program, first program

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…

describes workflow initiation according to an organizational relationship between the users/business objects see claim…
53

US20040107240A1

(Boris Zabarski, 2004)
(Original Assignee) Conexant Inc     

(Current Assignee)
Conexant Inc ; Brooktree Broadband Holding Inc
Method and system for intertask messaging between multiple processors
first VM, second VM multiple processors

first message first message

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
54

US20030014551A1

(Kunihito Ishibashi, 2003)
(Original Assignee) Future System Consulting Corp     

(Current Assignee)
Future Architect Inc
Framework system
second datacenter, second datacenter location queue management

first message low definition

queue usage more set

35 U.S.C. 103(a)

35 U.S.C. 102(b)
discloses all subject matter of the claimed invention as discussed above with respect to claims…

describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

teaches diverting said email message from delivery to the folder…

discloses a similar method of providing an electronic group card in which, when a signer/participant signs the card, he/she is…
55

US20030055668A1

(Amitabh Saran, 2003)
(Original Assignee) TriVium Systems Inc     

(Current Assignee)
TriVium Systems Inc
Workflow engine for automating business processes in scalable multiprocessor computer platforms
first VM second function

datacenter controller third data set

first message first message

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

describes workflow initiation according to an organizational relationship between the users/business objects see claim…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
56

US20030097457A1

(Amitabh Saran, 2003)
(Original Assignee) Amitabh Saran; Mathews Manaloor; Arun Maheshwari; Sanjay Suri; Tarak Goradia
Scalable multiprocessor architecture for business computer platforms
queue requests exchanging messages

message request message request

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

describes workflow initiation according to an organizational relationship between the users/business objects see claim…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
57

US20040019643A1

(Robert Zirnstein, 2004)
(Original Assignee) Canon Inc     

(Current Assignee)
Canon Inc
Remote command server
producer worker predetermined location

message request, datacenter queue request email address data

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses sending the message to the intended recipient after parsing the message…

teaches wherein the email server comprises a portion of an…

teaches that the step of responding to said user terminals is performed by transmitting to each of said user terminals…

discloses if the extracted command is instead a request for a web page then command server module selects a function…
58

US20020131089A1

(Yoshifumi Tanimoto, 2002)
(Original Assignee) Murata Machinery Ltd     

(Current Assignee)
Murata Machinery Ltd
Internet facsimile machine, and internet facsimile communication method
second datacenter location printing instruction

datacenter controller control unit

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches wherein the setting unit sets the second email address to a…

discloses converting the image data from one format to another see paragraph…

teaches a method of messaging read as an e-mail sending/receiving function read as a data communication method for…

teaches causing a workflow engine to suspend a workflow task when a received document is associated with an error see…
59

CN1437146A

(叶天正, 2003)
(Original Assignee) International Business Machines Corp
Method for composing, browsing, replying to and forwarding e-mail, and e-mail client
co-located workers e-mail system

first VM mail browsing

queue requests included

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches the stories for transmission to the end user station are selected on the basis of content of the story and…

teaches wherein the graphical user interface may be operable to display a notification alert generated by the…

discloses the referral list is formatted into an SMS application message and is pushed into and appears on the callers…

discloses a communication system and method for pushing electronic messages to a wireless portable device arrival…
60

EP1347390A1

(K. Ishibashi, c/o Future System Consulting Corp., 2003)
(Original Assignee) Future System Consulting Corp     

(Current Assignee)
Future System Consulting Corp
Framework system
second datacenter, second datacenter location queue management

first message low definition

queue usage more set

35 U.S.C. 103(a)

35 U.S.C. 102(b)
discloses all subject matter of the claimed invention as discussed above with respect to claims…

describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

teaches diverting said email message from delivery to the folder…

discloses a similar method of providing an electronic group card in which, when a signer/participant signs the card, he/she is…
61

US20020120696A1

(Gary Mousseau, 2002)
(Original Assignee) Research in Motion Ltd     

(Current Assignee)
BlackBerry Ltd
System and method for pushing information from a host system to a mobile data communication device
first datacenter location corresponding locations

first criterion communication network

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches a detection component coupled to the processor wherein the detection component comprises a sensor for…

teaches the memory storing the status report for a predefined length of time after the status report is transmitted to…

teaches a means for determining a position of the mobile unit see col…

discloses the mobile device application platform as claimed in claim…
62

JP2001285287A

(Jerremy Holland, 2001)
(Original Assignee) Agilent Technol Inc; Agilent Technologies Inc
Publish/subscribe apparatus and method using pre-filtering and post-filtering
processing module subscribe apparatus

first server, second server client

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches the display of a visual indicator that serves to notify a user of an event…

teaches the method further comprising requesting the sender to indicate a priority level of the first message ie…

teaches processing the electronic file comprises parsing the electronic file and the address information of the…

teaches a content sharing system in which content of multimedia data on a server is shared with clients of a plurality…
63

US20020120664A1

(Robert Horn, 2002)
(Original Assignee) Aristos Logic Corp     

(Current Assignee)
Aristos Logic Corp
Scalable transaction processing pipeline
queue requests logical block address

processing module integrated circuit

second datacenter, second datacenter location queue management

35 U.S.C. 103(a)

35 U.S.C. 102(b)
discloses methods and systems for managing integration of a heterogeneous application landscape and…

describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

discloses the receiving and routing of a response message by the…
64

KR20000031303A

(박윤경, 2000)
(Original Assignee) Chung Sun-jong; Electronics and Telecommunications Research Institute
Method for maintaining confidentiality of Internet e-mail messages
second server client

first message recipient

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches the stories for transmission to the end user station are selected on the basis of content of the story and…

teaches wherein the graphical user interface may be operable to display a notification alert generated by the…

discloses the referral list is formatted into an SMS application message and is pushed into and appears on the callers…

discloses a communication system and method for pushing electronic messages to a wireless portable device arrival…
65

CN102930427A

(潘世行, 2013)
(Original Assignee) Huaqin Telecom Technology Co Ltd     

(Current Assignee)
Huaqin Telecom Technology Co Ltd
Schedule management method and mobile terminal thereof
queue requests request information

message request including access

66

CN102891779A

(徐立人, 2013)
(Original Assignee) BEIJING WRD TECHNOLOGY Co Ltd     

(Current Assignee)
BEIJING WRD TECHNOLOGY Co Ltd
Large-scale network performance measurement system and method for IP networks
producer worker, consumer worker the results

second virtual machine, virtual machine manager cycle time

first server transmission protocol

67

CN102855148A

(胡展鸿, 2013)
(Original Assignee) Guangdong Oppo Mobile Telecommunications Corp Ltd     

(Current Assignee)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Android-based boot management method
queue requests, datacenter queue corresponding activity

datacenter controller, first datacenter location database

68

CN102740228A

(底浩, 2012)
(Original Assignee) Beijing Xiaomi Technology Co Ltd     

(Current Assignee)
Beijing Xiaomi Technology Co Ltd
Location information sharing method, apparatus and system
second datacenter, second datacenter location upon determining

first datacenter location identified

69

CN102591721A

(李江林, 2012)
(Original Assignee) Beijing Feinno Communication Technology Co Ltd     

(Current Assignee)
Beijing Feinno Communication Technology Co Ltd
Method and system for assigning threads to execute tasks
first datacenter number of

virtual machine manager per unit time

70

CN102572316A

(G. Cote, 2012)
(Original Assignee) Apple Computer Inc     

(Current Assignee)
Apple Inc
Overflow control techniques for image signal processing
second datacenter unit

datacenter queue request destination

71

CN102902669A

(吴志祥, 2013)
(Original Assignee) TONGCHENG NETWORK TECHNOLOGY Co Ltd     

(Current Assignee)
TONGCHENG NETWORK TECHNOLOGY Co Ltd
Distributed information crawling method based on an Internet system
datacenter controller, first datacenter location database

datacenter queue by the central

72

CN102479108A

(孙鹏, 2012)
(Original Assignee) Institute of Acoustics of CAS     

(Current Assignee)
Institute of Acoustics of CAS
Embedded system terminal resource management system and method for multiple application processes
virtual machine manager terminal image

queue usage usage status, usage

first datacenter number of, maximum number

73

CN102741843A

(王震, 2012)
(Original Assignee) Qingdao Hisense Media Network Technology Co Ltd     

(Current Assignee)
Juhaokan Technology Co Ltd
Method and apparatus for reading data from a database
processing module acquisition module

first datacenter location identified

74

CN102713847A

(Thomas R. Woller, 2012)
(Original Assignee) Advanced Micro Devices Inc     

(Current Assignee)
Advanced Micro Devices Inc
Hypervisor isolation of processor cores
processing module at least one computing

first datacenter, datacenter queue a queue

datacenter controller accelerator

35 U.S.C. 103(a)

35 U.S.C. 102(b)
discloses encoding an image signal into a digitized image signal…

discloses a componentized audio driver comprising an audio filter graph for processing an audio data stream kernel mode…

discloses a mixing system where having global effects such as chorus and reverb that can be applied in varying amounts…

discloses a method wherein a data transmission algorithm is used to ascertain network bandwidth…
75

KR20120111734A

(Keith A. Lowery, 2012)
(Original Assignee) Advanced Micro Devices, Incorporated
Hypervisor isolation of processor cores
virtual machine virtual machine

second VMs hypervisor

35 U.S.C. 103(a)

35 U.S.C. 102(b)
discloses encoding an image signal into a digitized image signal…

discloses a componentized audio driver comprising an audio filter graph for processing an audio data stream kernel mode…

discloses a mixing system where having global effects such as chorus and reverb that can be applied in varying amounts…

discloses a method wherein a data transmission algorithm is used to ascertain network bandwidth…
76

CN101923491A

(过敏意, 2010)
(Original Assignee) Shanghai Jiaotong University     

(Current Assignee)
Shanghai Jiaotong University
Method for scheduling thread group address spaces and switching threads in a multi-core environment
queue requests included

queue cache when a thread

77

WO2009032493A1

(Paul J. Callaway, 2009)
(Original Assignee) Chicago Mercantile Exchange, Inc.
Dynamic market data filtering
datacenter queue request buffering data

producer worker data message

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
discloses that the information can be related to various things which include a portable consumer device such as the…

teaches about wherein the customer preference comprises at least one of a channel through which the customer may be…

discloses that information is received by the system and then compared to the users preferences to determine if an alert…

teaches it is known to send messages to consumers and that those messages include the consumer s name and a message…
78

WO2008141900A1

(Nicholas Michael O'Rourke, 2008)
(Original Assignee) International Business Machines Corporation
Virtualized storage performance controller
second VMs performance management

producer worker performance data

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches a load monitoring condition determination method for determining a load monitoring condition for performing…

teaches the confidential data segment is divided into the confidential data segments…

discloses a method of tracking internet usage with an addon overlap on top of a web browser…

teaches using a threshold value for an item being monitored…




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
USENIX Association, Proceedings of the 2006 USENIX Annual Technical Conference: 29-42, 2006

Publication Year: 2006

High Performance VMM-bypass I/O In Virtual Machines

International Business Machines Corporation

Jiuxing Liu, Wei Huang, Bulent Abali, Dhabaleswar K. Panda
US8954993B2
CLAIM 1
A method to locally process queue requests from co-located workers in a datacenter, the method comprising: detecting a producer worker at a first server sending a first message to a datacenter queue (virtual machine monitor) at least partially stored at a second server;

storing the first message in a queue cache at the first server, wherein the queue cache includes one of a copy and a partial copy of the datacenter queue;

detecting a consumer worker at the first server sending a message request to the datacenter queue;

providing the stored first message to the consumer worker in response to the message request;

receiving a signal from a command channel associated with the datacenter queue;

and modifying the stored first message in response to receiving the signal.
High Performance VMM-bypass I/O In Virtual Machines. Currently, I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (second datacenter, datacenter queue) (VMM) and/or a privileged VM for each I/O operation, which may turn out to be a performance bottleneck for systems with high I/O demands, especially those equipped with modern high speed interconnects such as InfiniBand. In this paper, we propose a new device virtualization model called VMM-bypass I/O, which extends the idea of OS-bypass originated from user-level communication. Essentially, VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM. By exploiting the intelligence found in modern high speed network interfaces, VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation. To demonstrate the idea of VMM-bypass, we have developed a prototype called Xen-IB, which offers InfiniBand virtualization support in the Xen 3.0 VM environment. Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand. Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment.
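For orientation, the method of claim 1 amounts to a write-through local cache placed in front of a remote datacenter queue. The sketch below is illustrative only: the class and method names (LocalQueueCache, on_producer_send, and so on) are hypothetical and come from neither the patent nor the cited reference.

```python
# Illustrative sketch of the claim 1 steps; all names are hypothetical.
from collections import deque

class LocalQueueCache:
    """Partial local copy of a remote datacenter queue (the 'queue cache')."""

    def __init__(self, remote_send):
        self.cache = deque()            # locally stored messages
        self.remote_send = remote_send  # forwards messages to the remote queue

    def on_producer_send(self, message):
        """A co-located producer worker sends a message to the datacenter queue."""
        self.cache.append(message)      # store the first message locally
        self.remote_send(message)       # the remote queue still receives it

    def on_consumer_request(self):
        """A co-located consumer worker requests a message from the same queue."""
        if self.cache:
            return self.cache.popleft() # serve the locally stored message
        return None                     # otherwise fall through to the remote queue

    def on_command_signal(self, signal, message_id=None):
        """A signal on the command channel modifies locally stored messages."""
        if signal == "delete":
            self.cache = deque(m for m in self.cache if m["id"] != message_id)
```

Serving the consumer from the local copy avoids a round trip to the second server; the command-channel handler keeps the local copy consistent with the remote queue.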

US8954993B2
CLAIM 6
The method of claim 1, further comprising: intercepting the message request from the consumer worker to the datacenter queue (virtual machine monitor);

forwarding the message request to the datacenter queue if a first criterion is met;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met.
High Performance VMM-bypass I/O In Virtual Machines: same passage as quoted under claim 1, with 'virtual machine monitor' again mapped to 'second datacenter, datacenter queue'.

US8954993B2
CLAIM 7
The method of claim 6, wherein the first criterion includes whether the datacenter queue (virtual machine monitor) is configured to hide a requested message upon receiving the message request from the consumer worker.
High Performance VMM-bypass I/O In Virtual Machines: same passage as quoted under claim 1, with 'virtual machine monitor' again mapped to 'second datacenter, datacenter queue'.
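Claims 6 and 7 add a gate in front of the local caching: an intercepted message request is forwarded to the remote queue only when a first criterion holds, for example when the remote queue hides a requested message upon receiving the request, so its visibility state must stay in sync. A hedged sketch, with hypothetical names throughout:

```python
# Illustrative sketch of the claim 6/7 forwarding criterion; names are hypothetical.
def handle_message_request(request, queue_config, forward):
    """Forward an intercepted message request only if the first criterion is met.

    The claim 7 example criterion: the datacenter queue hides a requested
    message when it receives a request (visibility-timeout style semantics),
    so the request must reach the remote queue to keep that state consistent.
    """
    if queue_config.get("hides_message_on_request", False):
        forward(request)   # criterion met: forward to the datacenter queue
        return True
    return False           # criterion not met: refrain from forwarding
```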

US8954993B2
CLAIM 8
A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter, the VMM comprising: a queue usage detector module configured to: detect a producer worker at a first server, wherein the producer worker sends a first message to a datacenter queue (virtual machine monitor) at least partially stored at a second server;

and detect a consumer worker at the first server, wherein the consumer worker sends a message request to the datacenter queue, and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server;

and a processing module configured to: intercept the first message sent by the producer worker;

store the first message at the first server;

provide the stored first message to the consumer worker in response to the message request;

receive a signal from a command channel associated with the datacenter queue;

and modify the stored first message in response to receiving the signal.
High Performance VMM-bypass I/O In Virtual Machines: same passage as quoted under claim 1, with 'virtual machine monitor' again mapped to 'second datacenter, datacenter queue'.

US8954993B2
CLAIM 9
The VMM of claim 8, wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue (virtual machine monitor) request.
High Performance VMM-bypass I/O In Virtual Machines. Currently, I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (second datacenter, datacenter queue) (VMM) and/or a privileged VM for each I/O operation (datacenter queue request), which may turn out to be a performance bottleneck for systems with high I/O demands, especially those equipped with modern high speed interconnects such as InfiniBand. In this paper, we propose a new device virtualization model called VMM-bypass I/O, which extends the idea of OS-bypass originated from user-level communication. Essentially, VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM. By exploiting the intelligence found in modern high speed network interfaces, VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation. To demonstrate the idea of VMM-bypass, we have developed a prototype called Xen-IB, which offers InfiniBand virtualization support in the Xen 3.0 VM environment. Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand. Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment.
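Claims 9 and 10 recite a queue usage detector that observes datacenter queue requests and builds a usage table; per the patent's abstract, such usage information can identify "matched queues" shared by co-located workers. A minimal sketch with hypothetical names (the grouping key and the matching rule here are illustrative assumptions, not claim language):

```python
# Hypothetical sketch of a queue-usage table built from observed requests.
from collections import defaultdict

def build_queue_usage_table(observed_requests):
    """Record, per (queue, operation), which local workers issued each request."""
    table = defaultdict(set)
    for worker, queue, op in observed_requests:
        table[(queue, op)].add(worker)
    return table

def matched_queues(table):
    """Queues with both a local sender and a local requester are 'matched'."""
    sends = {q for (q, op) in table if op == "send"}
    requests = {q for (q, op) in table if op == "request"}
    return sends & requests
```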

US8954993B2
CLAIM 10
The VMM of claim 9, wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue (virtual machine monitor) request.
High Performance VMM-bypass I/O in Virtual Machines. Currently, I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (second datacenter, datacenter queue) (VMM) and/or a privileged VM for each I/O operation (datacenter queue request), which may turn out to be a performance bottleneck for systems with high I/O demands, especially those equipped with modern high speed interconnects such as InfiniBand. In this paper, we propose a new device virtualization model called VMM-bypass I/O, which extends the idea of OS-bypass originated from user-level communication. Essentially, VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM. By exploiting the intelligence found in modern high speed network interfaces, VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation. To demonstrate the idea of VMM-bypass, we have developed a prototype called Xen-IB, which offers InfiniBand virtualization support in the Xen 3.0 VM environment. Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand. Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment.

US8954993B2
CLAIM 12
The VMM of claim 8, wherein the processing module is further configured to: intercept the message request from the consumer worker to the datacenter queue (virtual machine monitor);

forward the message request to the datacenter queue if a first criterion is met;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met.
High Performance VMM-bypass I/O in Virtual Machines. Currently, I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (second datacenter, datacenter queue) (VMM) and/or a privileged VM for each I/O operation, which may turn out to be a performance bottleneck for systems with high I/O demands, especially those equipped with modern high speed interconnects such as InfiniBand. In this paper, we propose a new device virtualization model called VMM-bypass I/O, which extends the idea of OS-bypass originated from user-level communication. Essentially, VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM. By exploiting the intelligence found in modern high speed network interfaces, VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation. To demonstrate the idea of VMM-bypass, we have developed a prototype called Xen-IB, which offers InfiniBand virtualization support in the Xen 3.0 VM environment. Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand. Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment.
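The intercept/forward/refrain logic of claim 12, using the claim 13 criterion (whether the remote queue hides a requested message upon receipt), might be sketched as follows. The function and parameter names are hypothetical and the "criterion" is reduced to a single flag for illustration.

```python
# Sketch of claim 12: a consumer's message request is forwarded to the
# remote datacenter queue only when a first criterion is met; otherwise
# the interceptor refrains from forwarding and answers locally.

def handle_message_request(request, local_cache, remote_queue, queue_hides_messages):
    if queue_hides_messages:
        # Criterion met (claim 13): forward so the remote queue can
        # hide the message from other consumers.
        remote_queue.append(("forwarded", request))
        return None
    # Criterion not met: refrain from forwarding, serve from the local cache.
    return local_cache.get(request)

cache = {"req-1": "msg-1"}
remote = []
served = handle_message_request("req-1", cache, remote, queue_hides_messages=False)
forwarded = handle_message_request("req-1", cache, remote, queue_hides_messages=True)
```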

US8954993B2
CLAIM 13
The VMM of claim 12, wherein the first criterion includes whether the datacenter queue (virtual machine monitor) is configured to hide the requested message upon receiving the message request from the consumer worker.
High Performance VMM-bypass I/O in Virtual Machines. Currently, I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (second datacenter, datacenter queue) (VMM) and/or a privileged VM for each I/O operation, which may turn out to be a performance bottleneck for systems with high I/O demands, especially those equipped with modern high speed interconnects such as InfiniBand. In this paper, we propose a new device virtualization model called VMM-bypass I/O, which extends the idea of OS-bypass originated from user-level communication. Essentially, VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM. By exploiting the intelligence found in modern high speed network interfaces, VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation. To demonstrate the idea of VMM-bypass, we have developed a prototype called Xen-IB, which offers InfiniBand virtualization support in the Xen 3.0 VM environment. Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand. Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment.

US8954993B2
CLAIM 14
A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter, the datacenter comprising: a first and a second virtual machine (VM) operable to be executed on one or more physical machines;

and a datacenter controller configured to: detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue (virtual machine monitor) at least partially stored at a first datacenter location;

intercept the first message sent by the producer worker before storing the first message;

store the first message in a queue cache at a second datacenter (virtual machine monitor) location different from the first;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue;

provide the stored first message to the consumer worker in response to the message request, wherein the first message is stored and provided from within a server to the producer worker and the consumer worker;

receive a signal from a command channel associated with the datacenter queue;

and modify the stored first message in response to receiving the signal.
High Performance VMM-bypass I/O in Virtual Machines. Currently, I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (second datacenter, datacenter queue) (VMM) and/or a privileged VM for each I/O operation, which may turn out to be a performance bottleneck for systems with high I/O demands, especially those equipped with modern high speed interconnects such as InfiniBand. In this paper, we propose a new device virtualization model called VMM-bypass I/O, which extends the idea of OS-bypass originated from user-level communication. Essentially, VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM. By exploiting the intelligence found in modern high speed network interfaces, VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation. To demonstrate the idea of VMM-bypass, we have developed a prototype called Xen-IB, which offers InfiniBand virtualization support in the Xen 3.0 VM environment. Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand. Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment.
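The controller steps recited in claim 14 (intercept the producer's message, store it in a local queue cache, serve the co-located consumer from that cache, and modify the cached message on a command-channel signal) can be mirrored in a minimal sketch. The class, the dict-backed cache, and the "delete" signal are illustrative assumptions, not the patent's implementation.

```python
# End-to-end sketch of the claim 14 flow, with hypothetical names.

class QueueCacheController:
    def __init__(self):
        self.cache = {}   # queue cache: local (partial) copy of the remote queue

    def intercept_send(self, queue_name, message):
        # Intercept the producer's message before it reaches the
        # remote datacenter queue and store it locally instead.
        self.cache.setdefault(queue_name, []).append(message)

    def serve_request(self, queue_name):
        # Provide a locally stored message to a co-located consumer.
        msgs = self.cache.get(queue_name, [])
        return msgs.pop(0) if msgs else None

    def on_command_signal(self, queue_name, signal):
        # Modify the stored messages in response to a command-channel
        # signal (here, a remote "delete" command empties the cache entry).
        if signal == "delete":
            self.cache.pop(queue_name, None)

ctrl = QueueCacheController()
ctrl.intercept_send("orders", "first message")   # producer worker on VM 1
delivered = ctrl.serve_request("orders")         # consumer worker on VM 2
ctrl.intercept_send("orders", "second message")
ctrl.on_command_signal("orders", "delete")       # signal from the command channel
after_delete = ctrl.serve_request("orders")
```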

US8954993B2
CLAIM 16
The datacenter of claim 14, wherein the queue cache includes one of a copy and a partial copy of the datacenter queue (virtual machine monitor).
High Performance VMM-bypass I/O in Virtual Machines. Currently, I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (second datacenter, datacenter queue) (VMM) and/or a privileged VM for each I/O operation, which may turn out to be a performance bottleneck for systems with high I/O demands, especially those equipped with modern high speed interconnects such as InfiniBand. In this paper, we propose a new device virtualization model called VMM-bypass I/O, which extends the idea of OS-bypass originated from user-level communication. Essentially, VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM. By exploiting the intelligence found in modern high speed network interfaces, VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation. To demonstrate the idea of VMM-bypass, we have developed a prototype called Xen-IB, which offers InfiniBand virtualization support in the Xen 3.0 VM environment. Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand. Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment.

US8954993B2
CLAIM 17
The datacenter of claim 14, wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue (virtual machine monitor) request.
High Performance VMM-bypass I/O in Virtual Machines. Currently, I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (second datacenter, datacenter queue) (VMM) and/or a privileged VM for each I/O operation (datacenter queue request), which may turn out to be a performance bottleneck for systems with high I/O demands, especially those equipped with modern high speed interconnects such as InfiniBand. In this paper, we propose a new device virtualization model called VMM-bypass I/O, which extends the idea of OS-bypass originated from user-level communication. Essentially, VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM. By exploiting the intelligence found in modern high speed network interfaces, VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation. To demonstrate the idea of VMM-bypass, we have developed a prototype called Xen-IB, which offers InfiniBand virtualization support in the Xen 3.0 VM environment. Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand. Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment.

US8954993B2
CLAIM 18
The datacenter of claim 17, wherein the controller is further configured to observe the at least one observed datacenter queue (virtual machine monitor) request.
High Performance VMM-bypass I/O in Virtual Machines. Currently, I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (second datacenter, datacenter queue) (VMM) and/or a privileged VM for each I/O operation (datacenter queue request), which may turn out to be a performance bottleneck for systems with high I/O demands, especially those equipped with modern high speed interconnects such as InfiniBand. In this paper, we propose a new device virtualization model called VMM-bypass I/O, which extends the idea of OS-bypass originated from user-level communication. Essentially, VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM. By exploiting the intelligence found in modern high speed network interfaces, VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation. To demonstrate the idea of VMM-bypass, we have developed a prototype called Xen-IB, which offers InfiniBand virtualization support in the Xen 3.0 VM environment. Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand. Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment.

US8954993B2
CLAIM 22
The datacenter of claim 14, wherein the controller is further configured to: intercept the message request from the consumer worker to the datacenter queue (virtual machine monitor);

forward the message request to the datacenter queue if a first criterion is met;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met.
High Performance VMM-bypass I/O in Virtual Machines. Currently, I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (second datacenter, datacenter queue) (VMM) and/or a privileged VM for each I/O operation, which may turn out to be a performance bottleneck for systems with high I/O demands, especially those equipped with modern high speed interconnects such as InfiniBand. In this paper, we propose a new device virtualization model called VMM-bypass I/O, which extends the idea of OS-bypass originated from user-level communication. Essentially, VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM. By exploiting the intelligence found in modern high speed network interfaces, VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation. To demonstrate the idea of VMM-bypass, we have developed a prototype called Xen-IB, which offers InfiniBand virtualization support in the Xen 3.0 VM environment. Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand. Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment.

US8954993B2
CLAIM 23
The datacenter of claim 22, wherein the first criterion includes whether the datacenter queue (virtual machine monitor) is configured to hide the requested message upon receiving the message request from the consumer worker.
High Performance VMM-bypass I/O in Virtual Machines. Currently, I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (second datacenter, datacenter queue) (VMM) and/or a privileged VM for each I/O operation, which may turn out to be a performance bottleneck for systems with high I/O demands, especially those equipped with modern high speed interconnects such as InfiniBand. In this paper, we propose a new device virtualization model called VMM-bypass I/O, which extends the idea of OS-bypass originated from user-level communication. Essentially, VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM. By exploiting the intelligence found in modern high speed network interfaces, VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation. To demonstrate the idea of VMM-bypass, we have developed a prototype called Xen-IB, which offers InfiniBand virtualization support in the Xen 3.0 VM environment. Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand. Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment.




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC; Ardent Research Corp

Ezekiel Kruglick
US20130014114A1

Filed: 2012-09-12     Issued: 2013-01-10

Information processing apparatus and method for carrying out multi-thread processing

(Original Assignee) Sony Interactive Entertainment Inc     (Current Assignee) Sony Interactive Entertainment Inc

Akihito Nagata
US8954993B2
CLAIM 1
A method to locally process queue requests from co-located workers in a datacenter, the method comprising: detecting a producer worker (storage location, one processor) at a first server sending a first message to a datacenter queue (push module) at least partially stored at a second server;

storing the first message in a queue cache (write access) at the first server, wherein the queue cache includes one of a copy and a partial copy of the datacenter queue;

detecting a consumer worker (storage location, one processor) at the first server sending a message request to the datacenter queue;

providing the stored first message to the consumer worker in response to the message request;

receiving a signal from a command channel associated with the datacenter queue;

and modifying the stored first message in response to receiving the signal.
US20130014114A1
CLAIM 1
An information processing apparatus comprising: a memory configured to store a data queue comprised of individual data;
and at least one processor (producer worker, consumer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue, wherein, when there is no data to be retrieved in the data queue during processing of the data consumption thread, the processor places identification information of the data consumption thread into the data queue, and wherein, when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue, the processor changes a storage location (producer worker, consumer worker) of the data in such a manner that the data is acquired by the data consumption thread.
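The '114 claim 1 mechanism, in which a consumer that finds the queue empty leaves its identification there and a later producer hands the data directly to that waiting consumer, might be sketched as follows. The functions, the per-thread mailbox, and the convention that identifiers are strings while payloads are not, are illustrative assumptions.

```python
# Sketch of the '114 claim 1 handoff, with hypothetical names.
from collections import deque

mailboxes = {}   # per-thread delivery slots (the changed "storage location")

def consume(queue, thread_id):
    if queue and not isinstance(queue[0], str):
        return queue.popleft()    # data available: retrieve it
    queue.append(thread_id)       # empty queue: leave identification information
    return None

def produce(queue, data):
    if queue and isinstance(queue[0], str):
        waiter = queue.popleft()  # a consumer's identification is queued
        mailboxes[waiter] = data  # deliver directly to that consumer
    else:
        queue.append(data)        # no waiter: enqueue normally

q = deque()
got = consume(q, "consumer-1")    # queue empty: consumer registers itself
produce(q, 42)                    # producer finds the waiter and hands off
```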

US20130014114A1
CLAIM 21
An information processing apparatus according to claim 15, wherein access to the object is read/write access (queue cache) from/to data stored in the shared memory, wherein the information concerning the current state of the object appended to the head pointer is the number of threads that read the data and the number of threads that write the data, and wherein the information concerning access requested by the subsequent thread appended to each pointer is distinguished between read and write.

US20130014114A1
CLAIM 23
A non-transitory computer-readable medium in which a program is embedded, the program comprising: a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing of a thread, the queue being a pending queue requesting access to the object, structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer, which is a pointer indicating the identification information of a first thread in the linked list;
and a push module (datacenter queue, datacenter queue request, datacenter controller) operative to place the identity information of the thread into the queue when access is not granted.
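The linked-list pending queue of '114 claim 23 can be sketched as follows: each node holds a thread's identification and a pointer to the next waiter, the head carries the object's current state used to grant or deny access, and a push module enqueues the thread when access is denied. A single "locked" flag stands in for the object's state, and all names are assumptions.

```python
# Sketch of the referencing / determining / push modules of '114 claim 23.

class Node:
    def __init__(self, thread_id):
        self.thread_id = thread_id
        self.next = None              # pointer to the subsequent waiting thread

class PendingQueue:
    def __init__(self):
        self.head = None              # first thread in the linked list
        self.locked = False           # current state of the managed object

    def reference_and_determine(self):
        # Determining module: grant access only when the object is free.
        return not self.locked

    def push(self, thread_id):
        # Push module: append the thread's identification when denied.
        node = Node(thread_id)
        if self.head is None:
            self.head = node
        else:
            cur = self.head
            while cur.next:
                cur = cur.next
            cur.next = node
        return node

pq = PendingQueue()
granted_first = pq.reference_and_determine()   # object free: access granted
pq.locked = True                               # another thread now holds the object
granted_second = pq.reference_and_determine()
if not granted_second:
    pq.push("thread-B")
waiting = pq.head.thread_id
```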

US8954993B2
CLAIM 2
The method of claim 1, further comprising intercepting the first message sent by the producer worker (storage location, one processor) before storing the first message.
US20130014114A1
CLAIM 1
An information processing apparatus comprising: a memory configured to store a data queue comprised of individual data;
and at least one processor (producer worker, consumer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue, wherein, when there is no data to be retrieved in the data queue during processing of the data consumption thread, the processor places identification information of the data consumption thread into the data queue, and wherein, when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue, the processor changes a storage location (producer worker, consumer worker) of the data in such a manner that the data is acquired by the data consumption thread.

US8954993B2
CLAIM 3
The method of claim 1, wherein the producer worker (storage location, one processor) and the consumer worker (storage location, one processor) are co-located on a multi-core device at the first server.
US20130014114A1
CLAIM 1
An information processing apparatus comprising: a memory configured to store a data queue comprised of individual data;
and at least one processor (producer worker, consumer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue, wherein, when there is no data to be retrieved in the data queue during processing of the data consumption thread, the processor places identification information of the data consumption thread into the data queue, and wherein, when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue, the processor changes a storage location (producer worker, consumer worker) of the data in such a manner that the data is acquired by the data consumption thread.

US8954993B2
CLAIM 4
The method of claim 1, wherein the producer worker (storage location, one processor) and the consumer worker (storage location, one processor) are executed on different virtual machines, the different virtual machines configured to execute on the same physical hardware.
US20130014114A1
CLAIM 1
An information processing apparatus comprising: a memory configured to store a data queue comprised of individual data;
and at least one processor (producer worker, consumer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue, wherein, when there is no data to be retrieved in the data queue during processing of the data consumption thread, the processor places identification information of the data consumption thread into the data queue, and wherein, when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue, the processor changes a storage location (producer worker, consumer worker) of the data in such a manner that the data is acquired by the data consumption thread.

US8954993B2
CLAIM 6
The method of claim 1, further comprising: intercepting the message request from the consumer worker (storage location, one processor) to the datacenter queue (push module);

forwarding the message request to the datacenter queue if a first criterion is met;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met.
US20130014114A1
CLAIM 1
An information processing apparatus comprising: a memory configured to store a data queue comprised of individual data;
and at least one processor (producer worker, consumer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue, wherein, when there is no data to be retrieved in the data queue during processing of the data consumption thread, the processor places identification information of the data consumption thread into the data queue, and wherein, when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue, the processor changes a storage location (producer worker, consumer worker) of the data in such a manner that the data is acquired by the data consumption thread.

US20130014114A1
CLAIM 23
A non-transitory computer-readable medium in which a program is embedded, the program comprising: a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing of a thread, the queue being a pending queue requesting access to the object, structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer, which is a pointer indicating the identification information of a first thread in the linked list;
and a push module (datacenter queue, datacenter queue request, datacenter controller) operative to place the identity information of the thread into the queue when access is not granted.

US8954993B2
CLAIM 7
The method of claim 6, wherein the first criterion includes whether the datacenter queue (push module) is configured to hide a requested message upon receiving the message request from the consumer worker (storage location, one processor).
US20130014114A1
CLAIM 1
An information processing apparatus comprising: a memory configured to store a data queue comprised of individual data;
and at least one processor (producer worker, consumer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue, wherein, when there is no data to be retrieved in the data queue during processing of the data consumption thread, the processor places identification information of the data consumption thread into the data queue, and wherein, when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue, the processor changes a storage location (producer worker, consumer worker) of the data in such a manner that the data is acquired by the data consumption thread.

US20130014114A1
CLAIM 23
A non-transitory computer-readable medium in which a program is embedded, the program comprising: a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing of a thread, the queue being a pending queue requesting access to the object, structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer, which is a pointer indicating the identification information of a first thread in the linked list;
and a push module (datacenter queue, datacenter queue request, datacenter controller) operative to place the identity information of the thread into the queue when access is not granted.

US8954993B2
CLAIM 8
A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter, the VMM comprising: a queue usage detector module configured to: detect a producer worker (storage location, one processor) at a first server, wherein the producer worker sends a first message to a datacenter queue (push module) at least partially stored at a second server;

and detect a consumer worker (storage location, one processor) at the first server, wherein the consumer worker sends a message request to the datacenter queue, and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server;

and a processing module configured to: intercept the first message sent by the producer worker;

store the first message at the first server;

provide the stored first message to the consumer worker in response to the message request;

receive a signal from a command channel associated with the datacenter queue;

and modify the stored first message in response to receiving the signal.
US20130014114A1
CLAIM 1
An information processing apparatus comprising: a memory configured to store a data queue comprised of individual data;
and at least one processor (producer worker, consumer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue, wherein, when there is no data to be retrieved in the data queue during processing of the data consumption thread, the processor places identification information of the data consumption thread into the data queue, and wherein, when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue, the processor changes a storage location (producer worker, consumer worker) of the data in such a manner that the data is acquired by the data consumption thread.

US20130014114A1
CLAIM 23
A non-transitory computer-readable medium in which a program is embedded, the program comprising: a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing of a thread, the queue being a pending queue requesting access to the object, structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer, which is a pointer indicating the identification information of a first thread in the linked list;
and a push module (datacenter queue, datacenter queue request, datacenter controller) operative to place the identity information of the thread into the queue when access is not granted.

US8954993B2
CLAIM 9
The VMM of claim 8, wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue (push module) request.
US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program is embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing of a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, datacenter queue request, datacenter controller) operative to place the identity information of the thread into the queue when access is not granted .

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue (push module) request .
US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program is embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing of a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, datacenter queue request, datacenter controller) operative to place the identity information of the thread into the queue when access is not granted .

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker (storage location, one processor) and the consumer worker (storage location, one processor) are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor places identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker) of the data in such a manner that the data is acquired by the data consumption thread .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker (storage location, one processor) to the datacenter queue (push module) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
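The intercept-and-forward behavior of claims 12-13 can be sketched as below, using claim 13's example criterion: forward the consumer's request only when the remote datacenter queue hides a requested message upon receipt (so the remote visibility state stays consistent), and otherwise refrain and answer purely from local storage. The stub classes and field names are assumptions for illustration:

```python
class Request:
    """A consumer's message request, addressed to a named queue."""
    def __init__(self, queue_name):
        self.queue_name = queue_name

class RemoteQueue:
    """Stand-in for the remote datacenter queue; the flag models the
    claim-13 criterion of hiding a requested message on receipt."""
    def __init__(self, hides_message_on_request):
        self.hides_message_on_request = hides_message_on_request
        self.forwarded = []

    def forward(self, request):
        self.forwarded.append(request)

def handle_message_request(request, datacenter_queue, local_cache):
    # Intercept the consumer's request to the datacenter queue.
    if datacenter_queue.hides_message_on_request:
        # Criterion met: forward so the remote queue hides the message
        # from other consumers, then serve the locally stored copy.
        datacenter_queue.forward(request)
    # Criterion not met: refrain from forwarding; serve locally only.
    return local_cache.get(request.queue_name)
```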
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor places identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker) of the data in such a manner that the data is acquired by the data consumption thread .

US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program is embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing of a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, datacenter queue request, datacenter controller) operative to place the identity information of the thread into the queue when access is not granted .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue (push module) is configured to hide the requested message upon receiving the message request from the consumer worker (storage location, one processor) .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor places identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker) of the data in such a manner that the data is acquired by the data consumption thread .

US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program is embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing of a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, datacenter queue request, datacenter controller) operative to place the identity information of the thread into the queue when access is not granted .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller (push module) configured to : detect a producer worker (storage location, one processor) that is executed on a first VM and sends a first message to a datacenter queue (push module) at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache (write access) at a second datacenter location different from the first ;

detect a consumer worker (storage location, one processor) that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
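The claim-14 controller recited above (intercept the producer's message before it reaches the remote datacenter queue, store it in a local queue cache, serve it to a co-located consumer, and modify the cached message on a command-channel signal) can be sketched as follows. All names and the signal format are illustrative assumptions:

```python
class LocalQueueController:
    """Sketch of the claimed datacenter controller's local queue cache."""
    def __init__(self):
        self.cache = {}     # queue name -> ordered list of cached messages

    def intercept_send(self, queue_name, message):
        # Intercept and store the producer's message locally instead of
        # (or in addition to) sending it to the remote datacenter queue.
        self.cache.setdefault(queue_name, []).append(message)

    def serve_request(self, queue_name):
        # Provide the stored message to a co-located consumer's request.
        messages = self.cache.get(queue_name, [])
        return messages[0] if messages else None

    def on_command_signal(self, queue_name, signal):
        # Command-channel signal, e.g. the remote queue reporting that the
        # message was consumed elsewhere; here modification is deletion.
        if signal == "delete" and self.cache.get(queue_name):
            self.cache[queue_name].pop(0)
```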
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor places identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker) of the data in such a manner that the data is acquired by the data consumption thread .

US20130014114A1
CLAIM 21
. An information processing apparatus according to claim 15 , wherein access to the object is read/write access (queue cache) from/to data stored in the shared memory , wherein the information concerning the current state of the object appended to the head pointer is the number of threads that read the data and the number of threads that write the data , and wherein the information concerning access requested by the subsequent thread appended to each pointer is distinguished between read and write .

US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program is embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing of a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, datacenter queue request, datacenter controller) operative to place the identity information of the thread into the queue when access is not granted .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message sent by the producer worker (storage location, one processor) before storing the first message .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor places identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker) of the data in such a manner that the data is acquired by the data consumption thread .

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache (write access) includes one of a copy and a partial copy of the datacenter queue (push module) .
US20130014114A1
CLAIM 21
. An information processing apparatus according to claim 15 , wherein access to the object is read/write access (queue cache) from/to data stored in the shared memory , wherein the information concerning the current state of the object appended to the head pointer is the number of threads that read the data and the number of threads that write the data , and wherein the information concerning access requested by the subsequent thread appended to each pointer is distinguished between read and write .

US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program is embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing of a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, datacenter queue request, datacenter controller) operative to place the identity information of the thread into the queue when access is not granted .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue (push module) request .
US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program is embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing of a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, datacenter queue request, datacenter controller) operative to place the identity information of the thread into the queue when access is not granted .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue (push module) request .
US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program is embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing of a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, datacenter queue request, datacenter controller) operative to place the identity information of the thread into the queue when access is not granted .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker (storage location, one processor) and the consumer worker (storage location, one processor) are co-located on a multi-core device at the first datacenter location .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor places identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker) of the data in such a manner that the data is acquired by the data consumption thread .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker (storage location, one processor) to the datacenter queue (push module) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor places identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker) of the data in such a manner that the data is acquired by the data consumption thread .

US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program is embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing of a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, datacenter queue request, datacenter controller) operative to place the identity information of the thread into the queue when access is not granted .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue (push module) is configured to hide the requested message upon receiving the message request from the consumer worker (storage location, one processor) .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor places identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker) of the data in such a manner that the data is acquired by the data consumption thread .

US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program is embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing of a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, datacenter queue request, datacenter controller) operative to place the identity information of the thread into the queue when access is not granted .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20130044749A1

Filed: 2012-03-13     Issued: 2013-02-21

System and method for exchanging information among exchange applications

(Original Assignee) FireStar Software Inc     (Current Assignee) FireStar Software Inc

Mark Eisner, Gabriel Oancea
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (different gateways) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker (including information) at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .
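The gateway behavior of US20130044749A1 claim 1 above (a message header carries a routing slip templating a complex transaction as simple transactions in a defined order, and the gateway executes those its configuration data defines) can be sketched as below. The dictionary shapes for the message and configuration are assumptions for illustration:

```python
def execute_routing_slip(gateway_message, configuration_data):
    """Walk the routing slip in the message header in its defined order,
    executing each simple transaction this gateway is configured for."""
    executed = []
    for step in gateway_message["header"]["routing_slip"]:
        handler = configuration_data.get(step)  # simple transactions defined
        if handler is not None:                 # for this gateway, if any
            executed.append(handler(gateway_message["payload"]))
    return executed
```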

US20130044749A1
CLAIM 4
. The method according to claim 1 , wherein the one or more simple transactions includes : a transmission transaction in which a gateway message is sent from a first gateway to a second gateway , wherein the second gateway processes the message and sends an acknowledgment to the first gateway after processing the message ;
and a request/reply transaction in which a gateway message is sent from the first gateway to the second gateway , wherein the second gateway processes the gateway message and sends a reply message back to the first gateway , wherein the first gateway and the second gateway can be the same gateway or different gateways (first message) .

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (different gateways) sent by the producer worker before storing the first message .
US20130044749A1
CLAIM 4
. The method according to claim 1 , wherein the one or more simple transactions includes : a transmission transaction in which a gateway message is sent from a first gateway to a second gateway , wherein the second gateway processes the message and sends an acknowledgment to the first gateway after processing the message ;
and a request/reply transaction in which a gateway message is sent from the first gateway to the second gateway , wherein the second gateway processes the gateway message and sends a reply message back to the first gateway , wherein the first gateway and the second gateway can be the same gateway or different gateways (first message) .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker and the consumer worker (including information) are co-located on a multi-core device at the first server .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 4
. The method of claim 1 , wherein the producer worker and the consumer worker (including information) are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (different gateways) includes deleting the first message .
US20130044749A1
CLAIM 4
. The method according to claim 1 , wherein the one or more simple transactions includes : a transmission transaction in which a gateway message is sent from a first gateway to a second gateway , wherein the second gateway processes the message and sends an acknowledgment to the first gateway after processing the message ;
and a request/reply transaction in which a gateway message is sent from the first gateway to the second gateway , wherein the second gateway processes the gateway message and sends a reply message back to the first gateway , wherein the first gateway and the second gateway can be the same gateway or different gateways (first message) .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker (including information) to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue is configured to hide a requested message upon receiving the message request from the consumer worker (including information) .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (different gateways) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker (including information) at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
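Read as a whole, claim 8's two modules amount to a local cache of the datacenter queue maintained by the VMM. A minimal sketch, assuming hypothetical names and data structures not drawn from the patent:

```python
class LocalQueueVMM:
    # Illustrative sketch of claim 8 (hypothetical names): the processing
    # module intercepts a producer's message, stores it at the first server,
    # serves it to the co-located consumer, and modifies it (per claim 21,
    # by deletion) when the datacenter queue's command channel signals.

    def __init__(self):
        self.queue_cache = {}  # local copy/partial copy of the datacenter queue

    def intercept_send(self, queue_name, message):
        # Intercept the first message sent by the producer worker and store it.
        self.queue_cache.setdefault(queue_name, []).append(message)

    def serve_request(self, queue_name):
        # Provide the stored message in response to the consumer's request.
        msgs = self.queue_cache.get(queue_name, [])
        return msgs[0] if msgs else None

    def on_command_signal(self, queue_name, message):
        # Modify the stored message in response to a command-channel signal.
        self.queue_cache.get(queue_name, []).remove(message)
```

The sketch keeps producer and consumer traffic on the first server; only the command-channel signal ties the local copy back to the remote datacenter queue.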
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US20130044749A1
CLAIM 4
. The method according to claim 1 , wherein the one or more simple transactions includes : a transmission transaction in which a gateway message is sent from a first gateway to a second gateway , wherein the second gateway processes the message and sends an acknowledgment to the first gateway after processing the message ;
and a request/reply transaction in which a gateway message is sent from the first gateway to the second gateway , wherein the second gateway processes the gateway message and sends a reply message back to the first gateway , wherein the first gateway and the second gateway can be the same gateway or different gateways (first message) .

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker and the consumer worker (including information) are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker (including information) to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker (including information) .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message (different gateways) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker (including information) that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US20130044749A1
CLAIM 4
. The method according to claim 1 , wherein the one or more simple transactions includes : a transmission transaction in which a gateway message is sent from a first gateway to a second gateway , wherein the second gateway processes the message and sends an acknowledgment to the first gateway after processing the message ;
and a request/reply transaction in which a gateway message is sent from the first gateway to the second gateway , wherein the second gateway processes the gateway message and sends a reply message back to the first gateway , wherein the first gateway and the second gateway can be the same gateway or different gateways (first message) .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (different gateways) sent by the producer worker before storing the first message .
US20130044749A1
CLAIM 4
. The method according to claim 1 , wherein the one or more simple transactions includes : a transmission transaction in which a gateway message is sent from a first gateway to a second gateway , wherein the second gateway processes the message and sends an acknowledgment to the first gateway after processing the message ;
and a request/reply transaction in which a gateway message is sent from the first gateway to the second gateway , wherein the second gateway processes the gateway message and sends a reply message back to the first gateway , wherein the first gateway and the second gateway can be the same gateway or different gateways (first message) .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker and the consumer worker (including information) are co-located on a multi-core device at the first datacenter location .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (different gateways) by deleting the first message .
US20130044749A1
CLAIM 4
. The method according to claim 1 , wherein the one or more simple transactions includes : a transmission transaction in which a gateway message is sent from a first gateway to a second gateway , wherein the second gateway processes the message and sends an acknowledgment to the first gateway after processing the message ;
and a request/reply transaction in which a gateway message is sent from the first gateway to the second gateway , wherein the second gateway processes the gateway message and sends a reply message back to the first gateway , wherein the first gateway and the second gateway can be the same gateway or different gateways (first message) .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker (including information) to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker (including information) .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102668516A

Filed: 2011-12-02     Issued: 2012-09-12

Method and apparatus for implementing message delivery in a cloud message service

(Original Assignee) Huawei Technologies Co Ltd     (Current Assignee) Huawei Technologies Co Ltd

邓金波, 樊荣, 赵军
US8954993B2
CLAIM 1
. A method to locally process queue requests (included) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (implementing message) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
CN102668516A
CLAIM 1
. A method for implementing message (first message) delivery in a cloud message service , characterized by comprising : receiving a message sent by a first distributed program , storing the message data carried by the message in a distributed Key-Value storage system , and incrementing the send-message sequence number of the message queue corresponding to the message ; receiving a request from a second distributed program to read message data , reading the message data in the distributed Key-Value storage system , sending the read message data to the second distributed program , and , when the message data is the message data corresponding to the receive-message sequence number of the message queue , incrementing the receive-message sequence number of the message queue .
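The sequence-number scheme of CN102668516A claim 1 can be sketched as follows. A plain dict stands in for the distributed Key-Value store, and all names are hypothetical, not from the reference:

```python
class KVMessageQueue:
    # Illustrative sketch of CN102668516A claim 1 (hypothetical names): store
    # message data in a Key-Value store keyed by sequence number, and keep
    # per-queue send and receive sequence numbers.

    def __init__(self):
        self.kv = {}        # stand-in for the distributed Key-Value storage system
        self.send_seq = 0   # send-message sequence number
        self.recv_seq = 0   # receive-message sequence number

    def send(self, data):
        # Store the message data under the next send sequence number.
        self.send_seq += 1
        self.kv[self.send_seq] = data

    def receive(self):
        # Read the data for the next receive sequence number; increment the
        # receive sequence number only when that data exists.
        data = self.kv.get(self.recv_seq + 1)
        if data is not None:
            self.recv_seq += 1
        return data
```

The receive-side check mirrors the claim's condition: the receive sequence number advances only when the read data is the data corresponding to that sequence number, which yields in-order delivery.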

CN102668516A
CLAIM 22
. The cloud message service device according to claim 21 , characterized in that the message queue management unit determines whether the message delivery of the second distributed program requires an ordering guarantee according to the order-preservation value of the message queue or a parameter included (queue requests) in the request of the second distributed program to read message data .

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (implementing message) sent by the producer worker before storing the first message .
CN102668516A
CLAIM 1
. A method for implementing message (first message) delivery in a cloud message service , characterized by comprising : receiving a message sent by a first distributed program , storing the message data carried by the message in a distributed Key-Value storage system , and incrementing the send-message sequence number of the message queue corresponding to the message ; receiving a request from a second distributed program to read message data , reading the message data in the distributed Key-Value storage system , sending the read message data to the second distributed program , and , when the message data is the message data corresponding to the receive-message sequence number of the message queue , incrementing the receive-message sequence number of the message queue .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (implementing message) includes deleting the first message .
CN102668516A
CLAIM 1
. A method for implementing message (first message) delivery in a cloud message service , characterized by comprising : receiving a message sent by a first distributed program , storing the message data carried by the message in a distributed Key-Value storage system , and incrementing the send-message sequence number of the message queue corresponding to the message ; receiving a request from a second distributed program to read message data , reading the message data in the distributed Key-Value storage system , sending the read message data to the second distributed program , and , when the message data is the message data corresponding to the receive-message sequence number of the message queue , incrementing the receive-message sequence number of the message queue .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests (included) from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (implementing message) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102668516A
CLAIM 1
. A method for implementing message (first message) delivery in a cloud message service , characterized by comprising : receiving a message sent by a first distributed program , storing the message data carried by the message in a distributed Key-Value storage system , and incrementing the send-message sequence number of the message queue corresponding to the message ; receiving a request from a second distributed program to read message data , reading the message data in the distributed Key-Value storage system , sending the read message data to the second distributed program , and , when the message data is the message data corresponding to the receive-message sequence number of the message queue , incrementing the receive-message sequence number of the message queue .

CN102668516A
CLAIM 22
. The cloud message service device according to claim 21 , characterized in that the message queue management unit determines whether the message delivery of the second distributed program requires an ordering guarantee according to the order-preservation value of the message queue or a parameter included (queue requests) in the request of the second distributed program to read message data .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests (包含的) from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller (database) configured to : detect a producer worker that is executed on a first VM and sends a first message (implementing message) to a datacenter queue at least partially stored at a first datacenter location (database) ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102668516A
CLAIM 1
. A method for implementing message (first message) delivery in a cloud message service , characterized by comprising : receiving a message sent by a first distributed program , storing the message data carried by the message in a distributed Key-Value storage system , and incrementing the send-message sequence number of the message queue corresponding to the message ; receiving a request from a second distributed program to read message data , reading the message data in the distributed Key-Value storage system , sending the read message data to the second distributed program , and , when the message data is the message data corresponding to the receive-message sequence number of the message queue , incrementing the receive-message sequence number of the message queue .

CN102668516A
CLAIM 4
. The method according to any one of claims 1-3 , characterized in that the send-message sequence number and the receive-message sequence number are stored in a relational database (datacenter controller, first datacenter location) .

CN102668516A
CLAIM 22
. The cloud message service device according to claim 21 , characterized in that the message queue management unit determines whether the message delivery of the second distributed program requires an ordering guarantee according to the order-preservation value of the message queue or a parameter included (queue requests) in the request of the second distributed program to read message data .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (implementing message) sent by the producer worker before storing the first message .
CN102668516A
CLAIM 1
. A method for implementing message (first message) delivery in a cloud message service , characterized by comprising : receiving a message sent by a first distributed program , storing the message data carried by the message in a distributed Key-Value storage system , and incrementing the send-message sequence number of the message queue corresponding to the message ; receiving a request from a second distributed program to read message data , reading the message data in the distributed Key-Value storage system , sending the read message data to the second distributed program , and , when the message data is the message data corresponding to the receive-message sequence number of the message queue , incrementing the receive-message sequence number of the message queue .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker and the consumer worker are co-located on a multi-core device at the first datacenter location (database) .
CN102668516A
CLAIM 4
. The method according to any one of claims 1-3 , characterized in that the send-message sequence number and the receive-message sequence number are stored in a relational database (datacenter controller, first datacenter location) .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (implementing message) by deleting the first message .
CN102668516A
CLAIM 1
. A method for implementing message (first message) delivery in a cloud message service , characterized by comprising : receiving a message sent by a first distributed program , storing the message data carried by the message in a distributed Key-Value storage system , and incrementing the send-message sequence number of the message queue corresponding to the message ; receiving a request from a second distributed program to read message data , reading the message data in the distributed Key-Value storage system , sending the read message data to the second distributed program , and , when the message data is the message data corresponding to the receive-message sequence number of the message queue , incrementing the receive-message sequence number of the message queue .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20120066177A1

Filed: 2011-11-15     Issued: 2012-03-15

Systems and Methods for Remote Deletion of Contact Information

(Original Assignee) AT&T Mobility II LLC     (Current Assignee) AT&T Mobility II LLC

Scott Swanburg, Andre Okada, Paul Hanson, Chris Young
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue (desktop computer, laptop computer) at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request (message request) to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .
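The delete-then-synchronize flow of US20120066177A1 claim 1 can be sketched as below. Plain dicts stand in for the network and local contact databases; all names are hypothetical, not from the reference:

```python
def remote_delete(user, network_db, device_dbs):
    # Illustrative sketch of US20120066177A1 claim 1 (hypothetical
    # structures): delete the user's entry from the network contact database,
    # then run a synchronization pass that mirrors the deletion into each
    # device's local contact database.
    network_db.pop(user, None)
    for local_db in device_dbs:
        # Synchronization: drop local contacts no longer present upstream.
        for contact in list(local_db):
            if contact not in network_db:
                local_db.pop(contact)
```

The point of the claimed design is that the second device is never addressed directly; the deletion propagates as an ordinary side effect of synchronizing against the network database.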

US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
a laptop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
and a tablet computer associated with the second user .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request (message request) from the consumer worker to the datacenter queue (desktop computer, laptop computer) ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .

US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
a laptop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
and a tablet computer associated with the second user .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue (desktop computer, laptop computer) is configured to hide a requested message upon receiving the message request (message request) from the consumer worker .
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .

US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
a laptop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
and a tablet computer associated with the second user .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue (desktop computer, laptop computer) at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request (message request) to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .

US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
a laptop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
and a tablet computer associated with the second user .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue (desktop computer, laptop computer) request .
US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
a laptop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
and a tablet computer associated with the second user .

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue (desktop computer, laptop computer) request .
US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
a laptop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
and a tablet computer associated with the second user .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request (message request) from the consumer worker to the datacenter queue (desktop computer, laptop computer) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .

US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
a laptop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
and a tablet computer associated with the second user .
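The US20120066177A1 claim 1 mechanism charted above can be summarized as: a delete request removes the first user's contact entry from the network contact database, and a subsequent synchronization pass propagates that deletion to the second device's local contact database. The following is a minimal Python sketch of that flow; all class, method, and variable names are illustrative assumptions, not drawn from either patent.

```python
# Hypothetical sketch of the US20120066177A1 claim 1 flow: delete from
# the network database, then synchronize so the second device's local
# database drops the same entry.

class NetworkContactDB:
    def __init__(self):
        self.contacts = {}          # user_id -> contact info

    def handle_delete_request(self, first_user_id, second_device):
        # Delete the first user's contact info from the network database.
        self.contacts.pop(first_user_id, None)
        # Initiate synchronization with the second device's local database.
        self.synchronize(second_device)

    def synchronize(self, device):
        # Remove any locally stored contact that no longer exists upstream.
        for user_id in list(device.local_contacts):
            if user_id not in self.contacts:
                del device.local_contacts[user_id]

class Device:
    def __init__(self, local_contacts):
        self.local_contacts = dict(local_contacts)

db = NetworkContactDB()
db.contacts = {"alice": "alice@example.com", "bob": "bob@example.com"}
tablet = Device(db.contacts)        # second device starts fully synced
db.handle_delete_request("alice", tablet)
print("alice" in tablet.local_contacts)   # → False: deletion propagated
```

Note that the deletion reaches the second device only through the synchronization step, which is the element the chart maps against the target patent's message handling.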

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue (desktop computer, laptop computer) is configured to hide the requested message upon receiving the message request (message request) from the consumer worker .
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .

US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
a laptop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
and a tablet computer associated with the second user .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller (desktop computer, laptop computer) configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue (desktop computer, laptop computer) at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter (desktop computer, laptop computer) location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request (message request) to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .

US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
a laptop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
and a tablet computer associated with the second user .
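Target claim 14, charted above, recites a datacenter controller that intercepts a producer's message before remote storage, caches it locally, serves a co-located consumer's message request from the cache, and modifies the cached message when the queue's command channel signals. A minimal Python sketch of that controller behavior follows; the class and method names are assumptions for illustration only, and the "delete" signal is one possible modification.

```python
# Illustrative sketch of the claim 14 controller: intercept, cache
# locally, serve the consumer from the cache, and react to a
# command-channel signal by modifying the stored message.

class DatacenterController:
    def __init__(self):
        self.queue_cache = {}       # queue_name -> list of cached messages

    def on_producer_send(self, queue_name, message):
        # Intercept the first message before it is stored remotely.
        self.queue_cache.setdefault(queue_name, []).append(message)

    def on_consumer_request(self, queue_name):
        # Provide the stored message in response to the message request.
        cached = self.queue_cache.get(queue_name, [])
        return cached[0] if cached else None

    def on_command_signal(self, queue_name, signal):
        # Modify the stored message in response to the signal.
        if signal == "delete" and self.queue_cache.get(queue_name):
            self.queue_cache[queue_name].pop(0)

ctrl = DatacenterController()
ctrl.on_producer_send("jobs", "task-1")      # producer worker on first VM
print(ctrl.on_consumer_request("jobs"))      # consumer on second VM → task-1
ctrl.on_command_signal("jobs", "delete")     # command-channel signal
print(ctrl.on_consumer_request("jobs"))      # → None
```

The point of comparison for the chart is that the prior-art reference lacks this intercept-and-cache step; its deletion propagates through database synchronization rather than a local queue cache.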

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue (desktop computer, laptop computer) .
US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
a laptop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
and a tablet computer associated with the second user .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue (desktop computer, laptop computer) request .
US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
a laptop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
and a tablet computer associated with the second user .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue (desktop computer, laptop computer) request .
US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
a laptop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
and a tablet computer associated with the second user .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request (message request) from the consumer worker to the datacenter queue (desktop computer, laptop computer) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .

US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
a laptop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
and a tablet computer associated with the second user .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue (desktop computer, laptop computer) is configured to hide the requested message upon receiving the message request (message request) from the consumer worker .
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .

US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
a laptop computer (second datacenter, datacenter queue, datacenter queue request, datacenter controller, second datacenter location) associated with the second user ;
and a tablet computer associated with the second user .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20130036427A1

Filed: 2011-08-03     Issued: 2013-02-07

Message queuing with flexible consistency options

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

Han Chen, Minkyong Kim, Hui Lei, Fan Ye
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (time t) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20130036427A1
CLAIM 8
. The method of claim 1 , wherein the message sequence index is a list of tuples in the form of (id , handle , timestamp) , where id is an identifier of a message , handle is a unique number associated with the message , and timestamp is a time t (first message, first criterion) that the message is available for retrieval .
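The (id, handle, timestamp) message sequence index of US20130036427A1 claim 8 can be sketched as a simple list of tuples in which the timestamp gates availability: sampling the index skips messages whose availability time has not yet passed. The function and field names below are illustrative assumptions.

```python
# Minimal sketch of the claim 8 message sequence index: a list of
# (id, handle, timestamp) tuples, where timestamp is the time the
# message becomes available for retrieval.

def sample_available(index, now):
    """Return ids of messages whose availability time has passed."""
    return [msg_id for (msg_id, handle, ts) in index if ts <= now]

index = [
    ("m1", 1001, 10.0),   # available at t=10
    ("m2", 1002, 20.0),   # available at t=20
    ("m3", 1003, 15.0),   # available at t=15
]
print(sample_available(index, 16.0))   # → ['m1', 'm3']
```

This availability timestamp is the "time t" the chart maps against the target patent's "first message" and "first criterion" elements.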

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (time t) sent by the producer worker before storing the first message .
US20130036427A1
CLAIM 8
. The method of claim 1 , wherein the message sequence index is a list of tuples in the form of (id , handle , timestamp) , where id is an identifier of a message , handle is a unique number associated with the message , and timestamp is a time t (first message, first criterion) that the message is available for retrieval .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (time t) includes deleting the first message .
US20130036427A1
CLAIM 8
. The method of claim 1 , wherein the message sequence index is a list of tuples in the form of (id , handle , timestamp) , where id is an identifier of a message , handle is a unique number associated with the message , and timestamp is a time t (first message, first criterion) that the message is available for retrieval .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion (time t) is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20130036427A1
CLAIM 8
. The method of claim 1 , wherein the message sequence index is a list of tuples in the form of (id , handle , timestamp) , where id is an identifier of a message , handle is a unique number associated with the message , and timestamp is a time t (first message, first criterion) that the message is available for retrieval .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion (time t) includes whether the datacenter queue is configured to hide a requested message upon receiving the message request from the consumer worker .
US20130036427A1
CLAIM 8
. The method of claim 1 , wherein the message sequence index is a list of tuples in the form of (id , handle , timestamp) , where id is an identifier of a message , handle is a unique number associated with the message , and timestamp is a time t (first message, first criterion) that the message is available for retrieval .
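Target claims 6 and 7, charted above, describe forwarding the intercepted message request to the remote datacenter queue only when a first criterion is met; per claim 7, one such criterion is whether the queue hides the requested message on receipt (a visibility-timeout style queue), so the remote copy's state stays consistent with the locally served message. The sketch below illustrates that branch; all names are assumptions, not from the patent.

```python
# Hedged sketch of the claim 6/7 interception logic: forward the request
# only when the queue hides messages on receipt; either way, answer the
# consumer from the local cache.

def handle_message_request(request, queue_hides_on_receive, forward, serve_local):
    if queue_hides_on_receive:
        # Criterion met: forward so the remote queue hides its copy too.
        forward(request)
    # Refrain from forwarding otherwise; serve locally in both cases.
    return serve_local(request)

forwarded = []
result = handle_message_request(
    "get:jobs",
    queue_hides_on_receive=True,
    forward=forwarded.append,
    serve_local=lambda req: "task-1",
)
print(result, forwarded)   # → task-1 ['get:jobs']
```

When the criterion is not met, `forwarded` stays empty and the request is still answered from the local cache.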

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (time t) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20130036427A1
CLAIM 8
. The method of claim 1 , wherein the message sequence index is a list of tuples in the form of (id , handle , timestamp) , where id is an identifier of a message , handle is a unique number associated with the message , and timestamp is a time t (first message, first criterion) that the message is available for retrieval .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (time t) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20130036427A1
CLAIM 8
. The method of claim 1 , wherein the message sequence index is a list of tuples in the form of (id , handle , timestamp) , where id is an identifier of a message , handle is a unique number associated with the message , and timestamp is a time t (first message, first criterion) that the message is available for retrieval .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion (time t) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker .
US20130036427A1
CLAIM 8
. The method of claim 1 , wherein the message sequence index is a list of tuples in the form of (id , handle , timestamp) , where id is an identifier of a message , handle is a unique number associated with the message , and timestamp is a time t (first message, first criterion) that the message is available for retrieval .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller (readable program) configured to : detect a producer worker that is executed on a first VM and sends a first message (time t) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20130036427A1
CLAIM 8
. The method of claim 1 , wherein the message sequence index is a list of tuples in the form of (id , handle , timestamp) , where id is an identifier of a message , handle is a unique number associated with the message , and timestamp is a time t (first message, first criterion) that the message is available for retrieval .

US20130036427A1
CLAIM 15
. A computer program product for managing message queuing , the computer program product comprising a computer readable storage medium having computer readable program (datacenter controller) code embodied therewith , the computer readable program code comprising computer readable program code configured to perform a method comprising : receiving a request from an application for retrieving a message from a queue stored across multiple nodes of the distributed storage system ;
identifying a preference associated with the queue with respect to message order and message duplication ;
sampling a message sequence index associated with the queue based on the preference that has been identified ;
selecting , in response to the sampling , the message ;
making the message that has been selected unavailable to other applications for a given interval of time , while maintaining the message in the queue ;
and sending the message to the application .
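US20130036427A1 claim 15, quoted above, retrieves a message by sampling the queue and then making the selected message unavailable to other applications for a given interval while keeping it in the queue, a visibility-timeout pattern. The following Python sketch illustrates that retrieval step under assumed names; the per-message `visible_after` field stands in for the claim's availability state.

```python
# Illustrative sketch of the claim 15 retrieval flow: select an
# available message, hide it for a given interval without removing it
# from the queue, and return it to the requesting application.

def retrieve(queue, now, visibility_interval):
    for msg in queue:
        if msg["visible_after"] <= now:
            # Make the message unavailable for the interval, but keep it.
            msg["visible_after"] = now + visibility_interval
            return msg["body"]
    return None

queue = [{"body": "task-1", "visible_after": 0.0}]
print(retrieve(queue, now=5.0, visibility_interval=30.0))   # → task-1
print(retrieve(queue, now=10.0, visibility_interval=30.0))  # → None (hidden)
print(retrieve(queue, now=40.0, visibility_interval=30.0))  # → task-1 again
```

This hide-on-receive behavior is the feature the chart maps against the target claims' criterion of a queue "configured to hide the requested message upon receiving the message request".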

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (time t) sent by the producer worker before storing the first message .
US20130036427A1
CLAIM 8
. The method of claim 1 , wherein the message sequence index is a list of tuples in the form of (id , handle , timestamp) , where id is an identifier of a message , handle is a unique number associated with the message , and timestamp is a time t (first message, first criterion) that the message is available for retrieval .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (time t) by deleting the first message .
US20130036427A1
CLAIM 8
. The method of claim 1 , wherein the message sequence index is a list of tuples in the form of (id , handle , timestamp) , where id is an identifier of a message , handle is a unique number associated with the message , and timestamp is a time t (first message, first criterion) that the message is available for retrieval .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (time t) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20130036427A1
CLAIM 8
. The method of claim 1 , wherein the message sequence index is a list of tuples in the form of (id , handle , timestamp) , where id is an identifier of a message , handle is a unique number associated with the message , and timestamp is a time t (first message, first criterion) that the message is available for retrieval .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion (time t) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker .
US20130036427A1
CLAIM 8
. The method of claim 1 , wherein the message sequence index is a list of tuples in the form of (id , handle , timestamp) , where id is an identifier of a message , handle is a unique number associated with the message , and timestamp is a time t (first message, first criterion) that the message is available for retrieval .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20120117167A1

Filed: 2011-05-16     Issued: 2012-05-10

System and method for providing recommendations to a user in a viewing social network

(Original Assignee) Sony Corp     (Current Assignee) Sony Corp

Aran Sadja, Jeffrey Tang, Bryan Mihalov, Ludovic Douillet, Nobukazu Sugiyama
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue (more servers) at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker to the datacenter queue (more servers) ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue (more servers) is configured to hide a requested message upon receiving the message request from the consumer worker .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue (more servers) at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue (more servers) request .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue (more servers) request .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker to the datacenter queue (more servers) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue (more servers) is configured to hide the requested message upon receiving the message request from the consumer worker .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller (readable program, more processor) configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue (more servers) at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM (specific media) and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20120117167A1
CLAIM 7
. The method of claim 6 , further comprising : querying the at least one local connection for recommendation data , wherein the recommendation data comprises one of current media being viewed by the at least one local connection , local media preferences associated with the local connection and specific media (second VM) recommendations by the local connection ;
receiving the recommendation data .

US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processor (datacenter controller) s for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US20120117167A1
CLAIM 20
. A tangible non-transitory computer readable medium storing one or more computer readable programs (datacenter controller) adapted to cause a processor based system to execute steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .
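
The controller recited in US8954993B2 claim 14 intercepts a producer's message before it reaches the remote datacenter queue, caches it locally, serves it to a co-located consumer, and modifies the cached copy on a command-channel signal. A minimal sketch of that flow, with all class and method names hypothetical (they do not appear in either patent):

```python
# Illustrative sketch only: intercept, cache, serve locally, modify on signal.
from collections import deque

class QueueCacheController:
    def __init__(self):
        self.cache = deque()  # local queue cache at the second datacenter location

    def intercept_send(self, message):
        # Intercept the producer worker's message before it is stored remotely
        self.cache.append(message)

    def handle_request(self):
        # Serve a co-located consumer worker's message request from the cache
        return self.cache[0] if self.cache else None

    def on_command_signal(self, signal):
        # Modify the stored message in response to a command-channel signal,
        # e.g. delete it once the remote queue reports consumption
        if signal == "delete" and self.cache:
            self.cache.popleft()

ctrl = QueueCacheController()
ctrl.intercept_send({"id": 1, "body": "task"})
msg = ctrl.handle_request()
ctrl.on_command_signal("delete")
```

The sketch keeps the message visible until the command-channel signal arrives, mirroring the claim's separation of "provide the stored first message" from "modify the stored first message in response to receiving the signal".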

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue (more servers) .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue (more servers) request .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue (more servers) request .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker to the datacenter queue (more servers) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue (more servers) is configured to hide the requested message upon receiving the message request from the consumer worker .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .
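
Claims 22 and 23 condition forwarding of a consumer's message request on a first criterion: whether the datacenter queue hides the requested message upon receiving the request (visibility-timeout-style semantics). A hypothetical sketch of that routing decision, with all names assumed:

```python
# Illustrative sketch only: forward the request iff the first criterion is met.
def route_request(request, queue_hides_on_receive, local_cache):
    """Return ("forward", request) when the remote queue hides messages on
    receipt (criterion met), else ("local", cached message) and refrain
    from forwarding."""
    if queue_hides_on_receive:
        return ("forward", request)
    return ("local", local_cache.get(request["queue"]))

decision, payload = route_request(
    {"queue": "jobs"}, queue_hides_on_receive=False,
    local_cache={"jobs": "msg-1"})
```

When the criterion is not met, the request never leaves the server, matching the "refrain from forwarding" limitation.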




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20120117144A1

Filed: 2011-05-16     Issued: 2012-05-10

System and method for creating a viewing social network

(Original Assignee) Sony Corp     (Current Assignee) Sony Corp

Ludovic Douillet, Bryan Mihalov, Aran Sadja, Nobukazu Sugiyama, Jeffrey Tang
US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller (readable program) configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM (user selection) and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20120117144A1
CLAIM 4
. The method of claim 1 , wherein the establishing direct communication comprises : notifying the user that the one or more local connections have been detected ;
and receiving a user selection (second VM) of at least one of the one or more local connections from the user .

US20120117144A1
CLAIM 20
. A tangible non-transitory computer readable medium storing one or more computer readable programs (datacenter controller) adapted to cause a processor based system to execute steps comprising : detecting a user operating a first client device at an intermediary server , wherein the intermediary server is communicatively coupled to one or more client devices including the first client device and further communicatively coupled to one or more other intermediary servers each communicatively coupled with one or more other client devices ;
establishing communication with at least one social networking server maintaining information corresponding to the user , the information comprising one or more of user preferences , a plurality of user connections , and user connection preferences corresponding to each of the plurality of user connections ;
querying the at least one social networking server for the information ;
receiving the information ;
and generating a local viewing social network for the user comprising : generating a user profile according to the information ;
detecting one or more local connections of the plurality of user connections operating a client device of the one or more client devices or the one or more other client devices ;
and establishing direct communication between the user and at least one of the one or more local connections .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20110208796A1

Filed: 2011-05-05     Issued: 2011-08-25

Using distributed queues in an overlay network

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC

John Reed Riley, David A. Wortendyke, Michael J. Marucheck
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache (system memory) at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20110208796A1
CLAIM 1
. At a node in an overlay network , the node including a processor and system memory (queue cache, queue cache includes one) , the overlay network including a plurality of nodes , each node in the plurality of nodes being assigned responsibility for a range of identifiers on the overlay network , a method for replicating queue state within the overlay network , the method comprising : an act of accessing data for a process at the node , the node including a process runtime for running the process and a queue for queuing data for the process , the process runtime and the queue co-located within the process at the node ;
an act of performing a queue related operation on the accessed data , the queue related operation selected from among : queueing the accessed data in the queue and dequeueing the accessed data from the queue ;
an act of altering the queue state for the queue in response to the queue related operation ;
and an act of replicating the altered queue state for the queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that the altered queue state is available at any of the plurality of other nodes that is subsequently assigned responsibility for the process .
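
US20110208796A1 claim 1 recites altering a node's queue state on enqueue/dequeue and replicating the altered state to other overlay nodes so a successor node can take over the process. A minimal sketch of that replication step, with all names hypothetical:

```python
# Illustrative sketch only: replicate altered queue state to peer nodes.
class OverlayNode:
    def __init__(self, peers=None):
        self.queue = []
        self.peers = peers or []  # other nodes on the overlay network

    def enqueue(self, item):
        self.queue.append(item)
        self._replicate()  # queue-related operation altered the state

    def dequeue(self):
        item = self.queue.pop(0)
        self._replicate()
        return item

    def _replicate(self):
        # Copy the altered queue state so any peer later assigned
        # responsibility for the process can resume with current state
        for peer in self.peers:
            peer.queue = list(self.queue)

backup = OverlayNode()
node = OverlayNode(peers=[backup])
node.enqueue("a")
node.enqueue("b")
first = node.dequeue()
```

After each operation the backup node holds the same queue contents as the primary, which is the availability property the claim's replicating act is directed to.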

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller (more processor) configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache (system memory) at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20110208796A1
CLAIM 1
. At a node in an overlay network , the node including a processor and system memory (queue cache, queue cache includes one) , the overlay network including a plurality of nodes , each node in the plurality of nodes being assigned responsibility for a range of identifiers on the overlay network , a method for replicating queue state within the overlay network , the method comprising : an act of accessing data for a process at the node , the node including a process runtime for running the process and a queue for queuing data for the process , the process runtime and the queue co-located within the process at the node ;
an act of performing a queue related operation on the accessed data , the queue related operation selected from among : queueing the accessed data in the queue and dequeueing the accessed data from the queue ;
an act of altering the queue state for the queue in response to the queue related operation ;
and an act of replicating the altered queue state for the queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that the altered queue state is available at any of the plurality of other nodes that is subsequently assigned responsibility for the process .

US20110208796A1
CLAIM 17
. A network system , the network system comprising : an overlay network , the overlay network including a plurality of nodes , each node in the plurality of nodes being assigned responsibility for a range of identifiers on the overlay network ;
each node including one or more processors (datacenter controller) , system memory , and one or more computer storage devices having stored thereon computer-executable instructions that , when executed at the one or more processors , cause the node to : access data for a workflow process at the node , the node including a process runtime for running the workflow process and a queue for queuing data for the workflow process , the process runtime and the queue co-located within the workflow process at the node ;
perform a queue related operation on the accessed data , the queue related operation selected from among : queueing the accessed data in the queue and dequeueing the accessed data from the queue ;
alter the queue state for the queue in response to the queue related operation ;
and replicate the altered queue state for the queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that the altered queue state is available at any of the plurality of other nodes that is subsequently assigned responsibility for the workflow process .

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache (system memory) includes one of a copy and a partial copy of the datacenter queue .
US20110208796A1
CLAIM 1
. At a node in an overlay network , the node including a processor and system memory (queue cache, queue cache includes one) , the overlay network including a plurality of nodes , each node in the plurality of nodes being assigned responsibility for a range of identifiers on the overlay network , a method for replicating queue state within the overlay network , the method comprising : an act of accessing data for a process at the node , the node including a process runtime for running the process and a queue for queuing data for the process , the process runtime and the queue co-located within the process at the node ;
an act of performing a queue related operation on the accessed data , the queue related operation selected from among : queueing the accessed data in the queue and dequeueing the accessed data from the queue ;
an act of altering the queue state for the queue in response to the queue related operation ;
and an act of replicating the altered queue state for the queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that the altered queue state is available at any of the plurality of other nodes that is subsequently assigned responsibility for the process .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20100185665A1

Filed: 2010-01-21     Issued: 2010-07-22

Office-Based Notification Messaging System

(Original Assignee) SUNSTEIN KANN MURPHY AND TIMBERS LLP     (Current Assignee) SUNSTEIN KANN MURPHY AND TIMBERS LLP

Monroe Horn, Rory A. Apperson, Andrei S. MacKenzie
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (message recipients) at a first server sending a first message (time t) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20100185665A1
CLAIM 14
. A method according to claim 12 , further comprising automatically setting the reply message expiration based at least in part on a time (first message, first criterion) that the reply message is sent .

US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (time t) sent by the producer worker (message recipients) before storing the first message .
US20100185665A1
CLAIM 14
. A method according to claim 12 , further comprising automatically setting the reply message expiration based at least in part on a time (first message, first criterion) that the reply message is sent .

US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker (message recipients) and the consumer worker are co-located on a multi-core device at the first server .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US8954993B2
CLAIM 4
. The method of claim 1 , wherein the producer worker (message recipients) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (time t) includes deleting the first message .
US20100185665A1
CLAIM 14
. A method according to claim 12 , further comprising automatically setting the reply message expiration based at least in part on a time (first message, first criterion) that the reply message is sent .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion (time t) is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20100185665A1
CLAIM 14
. A method according to claim 12 , further comprising automatically setting the reply message expiration based at least in part on a time (first message, first criterion) that the reply message is sent .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion (time t) includes whether the datacenter queue is configured to hide a requested message upon receiving the message request from the consumer worker .
US20100185665A1
CLAIM 14
. A method according to claim 12 , further comprising automatically setting the reply message expiration based at least in part on a time (first message, first criterion) that the reply message is sent .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker (message recipients) at a first server , wherein the producer worker sends a first message (time t) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20100185665A1
CLAIM 14
. A method according to claim 12 , further comprising automatically setting the reply message expiration based at least in part on a time (first message, first criterion) that the reply message is sent .

US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .
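The processing-module limitations of '993 claim 8 (intercept a producer's first message bound for a remote datacenter queue, store it at the first server, provide it to a co-located consumer on request, and modify it upon a command-channel signal) can be sketched as below. The class and method names are hypothetical; claims 5 and 21 treat deletion as one form of "modifying", which is the form shown here.

```python
class LocalQueueProcessor:
    """Illustrative sketch (not the patented implementation) of the
    claim 8 processing module on the first server."""

    def __init__(self):
        # Local copy / partial copy of the datacenter queue (claim 1's
        # "queue cache"), keyed by queue name then message id.
        self.queue_cache = {}

    def intercept(self, queue_name, message_id, payload):
        """Intercept a first message before it leaves the first server."""
        self.queue_cache.setdefault(queue_name, {})[message_id] = payload

    def on_message_request(self, queue_name):
        """Provide a locally stored message in response to a consumer's
        message request, or None if nothing is cached."""
        cache = self.queue_cache.get(queue_name, {})
        if cache:
            message_id = next(iter(cache))
            return message_id, cache[message_id]
        return None

    def on_command_signal(self, queue_name, message_id):
        """Modify (here: delete) the stored first message in response to
        a signal from the queue's command channel."""
        self.queue_cache.get(queue_name, {}).pop(message_id, None)
```

A producer's send is intercepted into `queue_cache`, the co-located consumer is answered from that cache, and a command-channel signal removes the cached copy.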

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker (message recipients) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (time t) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20100185665A1
CLAIM 14
. A method according to claim 12 , further comprising automatically setting the reply message expiration based at least in part on a time (first message, first criterion) that the reply message is sent .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion (time t) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker .
US20100185665A1
CLAIM 14
. A method according to claim 12 , further comprising automatically setting the reply message expiration based at least in part on a time (first message, first criterion) that the reply message is sent .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller (readable program) configured to : detect a producer worker (message recipients) that is executed on a first VM and sends a first message (time t) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location (time interval) different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20100185665A1
CLAIM 8
. A method according to claim 1 , wherein creating an office notification message including an expiration comprises selecting an expiration , which includes at least one of : selecting a time interval (second datacenter location) after the notification message is sent ;
and receiving a user input specifying a time .

US20100185665A1
CLAIM 14
. A method according to claim 8 , wherein creating an office notification message including an expiration comprises selecting an expiration , which includes at least one of : selecting a time interval (second datacenter location) after the notification message is sent ;

US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US20100185665A1
CLAIM 19
. A computer program product for providing an office notification system on an office-based computer system , the computer program product comprising a non-transitory computer-readable storage medium having computer readable program (datacenter controller) code stored thereon , the computer readable program code comprising : program code for prompting a computer system user to check-in upon system login ;
program code for maintaining a list of checked-in users ;
program code for creating an office notification message including a recipient group and an expiration ;
program code for sending the created notification message to members of the recipient group who are checked-in ;
and program code for deleting the notification message from message recipient mailboxes after the notification message has expired .
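The '665 reference's expiration handling, cited above against the "modify the stored first message" limitations, combines claim 8 (expiration set as a time interval after the message is sent), claim 14 (expiration based on the send time), and claim 19 (deletion from recipient mailboxes after expiry, delivery only to checked-in users). A rough sketch, with all function and parameter names hypothetical:

```python
import time


def send_notification(mailboxes, recipients, checked_in, message,
                      ttl_seconds, now=None):
    """Deliver only to checked-in recipients; stamp an expiration time
    set as an interval after the send time (cf. '665 claims 8 and 14)."""
    sent_at = now if now is not None else time.time()
    expires_at = sent_at + ttl_seconds
    for user in recipients:
        if user in checked_in:
            mailboxes.setdefault(user, []).append((message, expires_at))
    return expires_at


def purge_expired(mailboxes, now):
    """Delete notifications whose expiration has passed from each
    recipient mailbox (cf. '665 claim 19's deletion program code)."""
    for user, items in mailboxes.items():
        mailboxes[user] = [(m, exp) for (m, exp) in items if exp > now]
```

The deterministic `now` parameter exists only to make the sketch testable; the claims themselves tie expiration to the actual send time.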

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (time t) sent by the producer worker (message recipients) before storing the first message .
US20100185665A1
CLAIM 14
. A method according to claim 12 , further comprising automatically setting the reply message expiration based at least in part on a time (first message, first criterion) that the reply message is sent .

US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker (message recipients) and the consumer worker are co-located on a multi-core device at the first datacenter location .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (time t) by deleting the first message .
US20100185665A1
CLAIM 14
. A method according to claim 12 , further comprising automatically setting the reply message expiration based at least in part on a time (first message, first criterion) that the reply message is sent .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (time t) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20100185665A1
CLAIM 14
. A method according to claim 12 , further comprising automatically setting the reply message expiration based at least in part on a time (first message, first criterion) that the reply message is sent .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion (time t) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker .
US20100185665A1
CLAIM 14
. A method according to claim 12 , further comprising automatically setting the reply message expiration based at least in part on a time (first message, first criterion) that the reply message is sent .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20110138400A1

Filed: 2009-12-03     Issued: 2011-06-09

Automated merger of logically associated messages in a message queue

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

Allan T. Chandler, Bret W. Dixon
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (one processor) at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .
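The merge step of '400 claim 5 (identify an association key in a request to add a new message, locate the associated message already in the queue, and merge the two) can be sketched as follows. The list-of-dicts layout and the concatenation-style merge are assumptions for illustration only; the claim does not specify how the merge combines message content.

```python
def add_message(queue, new_message):
    """Add a message to the queue, merging it with an existing message
    that shares its association key (cf. '400 claims 1 and 5).

    queue: list of dicts, each with optional 'key' and a 'body' field.
    """
    key = new_message.get("key")
    if key is not None:
        for existing in queue:
            if existing.get("key") == key:
                # Located an associated message: merge in place instead
                # of enqueueing a duplicate entry.
                existing["body"] += new_message["body"]
                return existing
    # No association key, or no associated message found: enqueue as-is.
    queue.append(new_message)
    return new_message
```

Two adds with the same key yield one queued entry with combined content; unkeyed messages are simply appended.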

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message sent by the producer worker (one processor) before storing the first message .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker (one processor) and the consumer worker are co-located on a multi-core device at the first server .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US8954993B2
CLAIM 4
. The method of claim 1 , wherein the producer worker (one processor) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker (one processor) at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker (one processor) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker (one processor) that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message sent by the producer worker (one processor) before storing the first message .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker (one processor) and the consumer worker are co-located on a multi-core device at the first datacenter location .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US8954993B2
CLAIM 20
. The datacenter of claim 14 , wherein the first and second VMs (host computing platform) are configured to execute on the same physical machine .
US20110138400A1
CLAIM 1
. A method for message merging in a messaging queue , the method comprising : receiving a request to add a new message to a message queue in a message queue manager executing in memory by a processor of a host computing platform (second VMs) ;
identifying an association key associating the new message with an existing message in the message queue ;
locating an associated message in the message queue corresponding to the identified association key ;
and , merging the new message with the located associated message in the message queue .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20100010671A1

Filed: 2009-07-06     Issued: 2010-01-14

Information processing system, information processing method, robot control system, robot control method, and computer program

(Original Assignee) Sony Corp     (Current Assignee) Sony Corp

Atsushi Miyamoto
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (respective processes) at a first server sending a first message (exchange messages) to a datacenter queue (different computer) at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request (reception information) to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20100010671A1
CLAIM 2
. The information processing system according to claim 1 , wherein the intermodule communication means is arranged for each process and includes a message broker used to exchange messages (first message) .

US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each of the modules ;
and a configuration file which includes computer names which execute processes , module names included in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computers (datacenter queue) .

US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .
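The broker behavior in '671 claim 16 (serialize and deserialize a message that crosses process boundaries, but omit both steps when the transmission source and reception source share a process) can be sketched as below. The broker API is hypothetical, and `pickle` merely stands in for whatever serialization the reference contemplates.

```python
import pickle


class MessageBroker:
    """Hedged sketch of a per-process message broker per '671 claim 16."""

    def __init__(self, process_id):
        self.process_id = process_id

    def deliver(self, message, dest_process_id):
        if dest_process_id == self.process_id:
            # Source and destination in an identical process: the
            # serialization and deserialization are omitted, and the
            # message object is passed directly.
            return message
        # Different process: serialize for the wire, then deserialize
        # (here both ends are simulated in one call).
        wire = pickle.dumps(message)
        return pickle.loads(wire)
```

Same-process delivery hands back the identical object, while cross-process delivery yields an equal but distinct copy that has round-tripped through serialization.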

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (exchange messages) sent by the producer worker (respective processes) before storing the first message .
US20100010671A1
CLAIM 2
. The information processing system according to claim 1 , wherein the intermodule communication means is arranged for each process and includes a message broker used to exchange messages (first message) .

US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker (respective processes) and the consumer worker are co-located on a multi-core device at the first server .
US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US8954993B2
CLAIM 4
. The method of claim 1 , wherein the producer worker (respective processes) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (exchange messages) includes deleting the first message .
US20100010671A1
CLAIM 2
. The information processing system according to claim 1 , wherein the intermodule communication means is arranged for each process and includes a message broker used to exchange messages (first message) .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request (reception information) from the consumer worker to the datacenter queue (different computer) ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each of the modules ;
and a configuration file which includes computer names which execute processes , module names included in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computers (datacenter queue) .
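
The interception logic recited in claims 6-7 of US8954993B2 above — forward the consumer's message request to the remote datacenter queue only when a criterion is met, otherwise refrain and serve locally — can be sketched as follows. The criterion used here (whether the remote queue hides a requested message on read, per claim 7) and all names are illustrative assumptions, not the patent's implementation:

```python
from collections import deque

def handle_message_request(local_cache: deque, remote_queue: deque,
                           remote_hides_on_read: bool):
    """Sketch of claims 6-7: gate forwarding of a consumer's message
    request on a first criterion (hypothetical names throughout)."""
    if remote_hides_on_read and remote_queue:
        # Criterion met: forward the request to the datacenter queue so
        # the requested message is hidden there for other consumers.
        return remote_queue.popleft(), "forwarded"
    # Criterion not met: refrain from forwarding; serve the locally
    # stored message instead.
    return local_cache.popleft(), "served locally"
```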

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue (different computer) is configured to hide a requested message upon receiving the message request (reception information) from the consumer worker .
US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each of the modules ;
and a configuration file which includes computer names which execute processes , module names included in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computers (datacenter queue) .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker (respective processes) at a first server , wherein the producer worker sends a first message (exchange messages) to a datacenter queue (different computer) at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request (reception information) to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20100010671A1
CLAIM 2
. The information processing system according to claim 1 , wherein the intermodule communication means is arranged for each process and includes a message broker used to exchange messages (first message) .

US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each of the modules ;
and a configuration file which includes computer names which execute processes , module names included in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computers (datacenter queue) .

US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .
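
The processing-module flow of US8954993B2 claim 8 above — intercept the co-located producer's message, store it at the first server, serve it to the co-located consumer on request, and modify (e.g. delete, per claims 5/21) the stored copy when the command channel signals — can be sketched as follows. This is a hypothetical illustration with invented names, not the patent's implementation:

```python
class LocalQueueProcessor:
    """Sketch of the VMM processing module of claim 8 (hypothetical)."""

    def __init__(self):
        self.local_store = {}  # message_id -> message body
        self.pending = []      # FIFO of message ids awaiting a consumer

    def intercept_send(self, message_id, body):
        # Producer's message bound for the remote datacenter queue is
        # intercepted and stored locally at the first server.
        self.local_store[message_id] = body
        self.pending.append(message_id)

    def serve_request(self):
        # Consumer's message request is answered from local storage.
        message_id = self.pending.pop(0)
        return message_id, self.local_store[message_id]

    def on_command_signal(self, message_id):
        # A signal on the command channel modifies the stored message;
        # here the modification is deletion.
        self.local_store.pop(message_id, None)
```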

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue (different computer) request .
US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computers (datacenter queue) .
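
The "table of queue usage" of claims 9/17 above, built from observed datacenter queue requests, can be sketched as follows. The request tuples and the notion of a "matched" queue (one that co-located workers both send to and request from, per the abstract) are illustrative assumptions:

```python
from collections import defaultdict

def build_queue_usage_table(observed_requests):
    """Sketch of claim 9: aggregate observed (worker, queue, operation)
    requests into a per-queue usage table (hypothetical tuple format)."""
    table = defaultdict(lambda: {"producers": set(), "consumers": set()})
    for worker, queue_name, op in observed_requests:
        role = "producers" if op == "send" else "consumers"
        table[queue_name][role].add(worker)
    return table

def matched_queues(table):
    # A queue is "matched" when co-located workers both send messages to
    # it and request messages from it.
    return [q for q, use in table.items()
            if use["producers"] and use["consumers"]]
```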

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue (different computer) request .
US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computer (datacenter queue) s .

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker (respective processes) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request (reception information) from the consumer worker to the datacenter queue (different computer) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each of the modules ;
and a configuration file which includes computer names which execute processes , module names included in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computers (datacenter queue) .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue (different computer) is configured to hide the requested message upon receiving the message request (reception information) from the consumer worker .
US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each of the modules ;
and a configuration file which includes computer names which execute processes , module names included in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computers (datacenter queue) .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller (readable program) configured to : detect a producer worker (respective processes) that is executed on a first VM and sends a first message (exchange messages) to a datacenter queue (different computer) at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request (reception information) to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20100010671A1
CLAIM 2
. The information processing system according to claim 1 , wherein the intermodule communication means is arranged for each process and includes a message broker used to exchange messages (first message) .

US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each of the modules ;
and a configuration file which includes computer names which execute processes , module names included in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computers (datacenter queue) .

US20100010671A1
CLAIM 15
. A computer-readable program (datacenter controller) which includes a plurality of modules and which is to be executed in a computer , the computer program makes the computer function as : synchronous processing means for executing a group of modules to perform synchronous real-time processing in a single process serving as a unit of execution of a program ;
parallel processing means for arranging modules which allow asynchronous processing and which should perform parallel processing in different processes and executing the modules in parallel ;
and intermodule communication means for performing transmission and reception of data among the modules by means of message passing .

US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (exchange messages) sent by the producer worker (respective processes) before storing the first message .
US20100010671A1
CLAIM 2
. The information processing system according to claim 1 , wherein the intermodule communication means is arranged for each process and includes a message broker used to exchange messages (first message) .

US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue (different computer) .
US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computers (datacenter queue) .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue (different computer) request .
US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computers (datacenter queue) .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue (different computer) request .
US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computers (datacenter queue) .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker (respective processes) and the consumer worker are co-located on a multi-core device at the first datacenter location .
US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (exchange messages) by deleting the first message .
US20100010671A1
CLAIM 2
. The information processing system according to claim 1 , wherein the intermodule communication means is arranged for each process and includes a message broker used to exchange messages (first message) .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request (reception information) from the consumer worker to the datacenter queue (different computer) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each of the modules ;
and a configuration file which includes computer names which execute processes , module names included in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computers (datacenter queue) .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue (different computer) is configured to hide the requested message upon receiving the message request (reception information) from the consumer worker .
US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each of the modules ;
and a configuration file which includes computer names which execute processes , module names included in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computers (datacenter queue) .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
EP2449849A1

Filed: 2009-06-29     Issued: 2012-05-09

Resource allocation

(Original Assignee) Nokia Oyj     (Current Assignee) Nokia Oyj

Harsh Jahagirdar
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (one processor) at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .
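
The EP2449849A1 claim 15 mechanism mapped above — client requests placed in a queue of messages, processed in a prioritised order against a system setting, with a record of resources allocated to clients — can be sketched as follows. All names and the priority scheme are illustrative assumptions, not the reference's implementation:

```python
import heapq

class ResourceAllocator:
    """Sketch of EP2449849A1 claim 15 (hypothetical names): prioritised
    message queue plus a record of resources allocated to clients."""

    def __init__(self):
        self._queue = []       # (priority, seq, client, request)
        self._seq = 0          # tie-breaker keeps FIFO order per priority
        self.allocations = {}  # client -> list of allocated resources

    def enqueue(self, client, request, priority=0):
        heapq.heappush(self._queue, (priority, self._seq, client, request))
        self._seq += 1

    def process_next(self, system_setting):
        # Lowest priority value is processed first; the computing-device
        # resource is allocated with reference to the system setting.
        _, _, client, request = heapq.heappop(self._queue)
        resource = system_setting.get(request, "default-resource")
        self.allocations.setdefault(client, []).append(resource)
        return client, resource
```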

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message sent by the producer worker (one processor) before storing the first message .
EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker (one processor) and the consumer worker are co-located on a multi-core device at the first server .
EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .

US8954993B2
CLAIM 4
. The method of claim 1 , wherein the producer worker (one processor) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker (one processor) at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .
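The VMM of US8954993B2 claim 8 pairs two modules: a queue usage detector (spot co-located producers and consumers of the same remote datacenter queue) and a processing module (intercept, store locally, serve locally, react to command-channel signals). A minimal sketch of that division of labor, with illustrative method names not drawn from the patent:

```python
from collections import defaultdict, deque

class LocalQueueProxy:
    """Sketch of the claimed VMM: intercepts messages that a co-located
    producer sends toward a remote datacenter queue, stores them at the
    first server, and serves them to a co-located consumer's request."""

    def __init__(self):
        self.local_cache = defaultdict(deque)  # queue name -> locally stored messages

    # -- queue usage detector / processing module: producer side --
    def observe_send(self, worker_id, queue_name, message):
        # Intercept the first message sent by the producer worker and
        # store it at the first server (claim 8).
        self.local_cache[queue_name].append(message)

    # -- consumer side --
    def observe_request(self, worker_id, queue_name):
        # Provide the stored message to the consumer in response to its
        # message request; fall through to the remote queue otherwise.
        if self.local_cache[queue_name]:
            return self.local_cache[queue_name].popleft()
        return None

    # -- command channel --
    def on_command_signal(self, queue_name, signal):
        # Modify the stored message in response to a signal from the
        # command channel, e.g. a delete broadcast from the real queue.
        if signal == "delete" and self.local_cache[queue_name]:
            self.local_cache[queue_name].popleft()
```

The point of the structure is that a matched producer/consumer pair on one server never pays the round trip to the second server holding the datacenter queue.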

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue request (media resource) .
EP2449849A1
CLAIM 13
. The method according to any preceding claim wherein said resources are multimedia resources (datacenter queue request) and include at least one media source , media effect , decoders , encoder or media sink .

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue request (media resource) .
EP2449849A1
CLAIM 13
. The method according to any preceding claim wherein said resources are multimedia resources (datacenter queue request) and include at least one media source , media effect , decoders , encoder or media sink .

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker (one processor) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker (one processor) that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message sent by the producer worker (one processor) before storing the first message .
EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue request (media resource) .
EP2449849A1
CLAIM 13
. The method according to any preceding claim wherein said resources are multimedia resources (datacenter queue request) and include at least one media source , media effect , decoders , encoder or media sink .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue request (media resource) .
EP2449849A1
CLAIM 13
. The method according to any preceding claim wherein said resources are multimedia resources (datacenter queue request) and include at least one media source , media effect , decoders , encoder or media sink .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker (one processor) and the consumer worker are co-located on a multi-core device at the first datacenter location .
EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20100325190A1

Filed: 2009-06-23     Issued: 2010-12-23

Using distributed queues in an overlay network

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC

John Reed Riley, David A. Wortendyke, Michael J. Marucheck
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (time t) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache (system memory) at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20100325190A1
CLAIM 1
. At a node in an overlay network , the node including a processor and system memory (queue cache, queue cache includes one) , the overlay network including a plurality of nodes , each node in the plurality of nodes being assigned responsibility for a range of identifiers on the overlay network , a method for replicating queue state within the overlay network , the method comprising : an act of receiving data for a process at the node , the node including a process runtime for running the process and a queue for queuing data for the process , the process runtime and the queue co-located within the process at the node , the node being assigned responsibility for a specified range of identifiers on the overlay network , the workflow identified by an identifier within the specified range of identifiers ;
an act of queuing the received data in the queue ;
an act of altering the queue state for the queue in response to queueing the received data ;
an act of replicating the altered queue state for the queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that if responsibility for the process is subsequently reassigned to one of the plurality of other nodes , the altered queue state is available to a process runtime at the reassigned node ;
an act of dequeuing the received data from the queue to the process runtime within the process ;
and an act of the process runtime processing the received data to perform some work .
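The US20100325190A1 claim-1 sequence (queue received data, alter the queue state, replicate the altered state to peer nodes so a reassigned node can resume, then dequeue and perform work) can be sketched as below. The `peers`/`replicas` names are illustrative assumptions, not patent terminology.

```python
class OverlayNode:
    """Sketch of the claimed replication flow: each alteration of the
    local queue state is pushed to peer nodes, so that if responsibility
    for the process is reassigned, the state is already available there."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.queue = []        # queue co-located with the process runtime
        self.peers = []        # other nodes receiving replicated state
        self.replicas = {}     # node_id -> replicated queue state held here

    def receive(self, data):
        self.queue.append(data)   # act of queuing the received data
        self._replicate()         # act of replicating the altered queue state

    def dequeue_and_process(self, work_fn):
        data = self.queue.pop(0)  # act of dequeuing to the process runtime
        result = work_fn(data)    # the runtime performs some work
        self._replicate()         # state altered again after dequeue
        return result

    def _replicate(self):
        for peer in self.peers:
            peer.replicas[self.node_id] = list(self.queue)
```

Note the replication after both the enqueue and the dequeue, matching the claim's two state-alteration steps.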

US20100325190A1
CLAIM 18
. An overlay ring network based on a distributed hash table , the overlay ring network including a plurality of peer nodes , each peer node assigned responsibility for a range of identifiers within the distributed hash table , each node including : system memory ;
one or more processors ;
and one or more computer storage media having stored thereon computer executable instructions that , when executed at one of the one or more processors , cause the node to participate in workflow processing within the overlay network , including each node being configured to a) initiate a process for a workflow , b) progress through a workflow as the node assigned responsibility for the workflow , and c) transition to the node assigned responsibility for a workflow , wherein a) initiating a process for a workflow includes : locally activating a workflow process for a workflow at the node ;
and co-locating a queue and a workflow runtime for the workflow within a locally activated process ;
b) progressing through a workflow as the node assigned responsibility for the workflow includes : receiving data for the workflow at the node , the workflow identified by an identifier within the specified range of identifiers ;
queuing the received data in a workflow queue for the workflow ;
altering the queue state for the workflow queue in response to queueing the received data ;
replicating the altered queue state for the workflow queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that if responsibility for the workflow is subsequently reassigned to one of the plurality of other nodes , the altered queue state is available to a workflow runtime at the reassigned node ;
dequeuing the received data from the workflow queue to the workflow runtime within the process ;
processing the received data at the workflow runtime (first message, first criterion) to perform some work ;
further altering the queue state in response to the received data being dequeued ;
and subsequent to successful performance of the work , replicating the further altered queue state for the workflow queue to the plurality of other nodes on the overlay network , replication subsequent to successful performance of the work helping insure that the plurality of other nodes retain appropriate replicated queue state in the event performance of the work is unsuccessful ;
and c) transitioning to the node assigned responsibility for a workflow includes : receiving replicated queue state for a workflow queue at another node on the overlay network , the replicated queue state representing that a workflow has been partially completed at the other node , the replicated queue state including an identifier that identifies the workflow within the distributed hash table , the identifier being outside the specified range of identifiers assigned to the node ;
detecting a change in the node configuration on the overlay network subsequent to receiving the replicated queue state ;
updating the specified range of identifiers for the node based on the detected change in node configuration , the update to the specified range of identifiers changing the assigned responsibilities for the node ;
determining that the identifier identifying the workflow is within the updated specified range of identifiers such that the node has been assigned responsibility for the workflow in view of the changed node configuration ;
utilizing the replicated queue state to adjust the state of a local workflow queue ;
and processing data from the local workflow queue to continue the workflow from the point of partial completion reached at the other node based on the replicated queue state .
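Part (c) of claim 18 above describes the takeover step: after a node-configuration change updates this node's identifier range, a workflow whose identifier now falls inside the range is resumed locally from the replicated queue state. A sketch under the assumption of inclusive integer ranges (an illustrative simplification of the DHT identifier space):

```python
def assume_responsibility(replicas, old_range, new_range):
    """Sketch of claim-18 part (c): determine which replicated workflows
    fall inside the node's updated identifier range but not its old one,
    and adopt their replicated queue state as the local workflow queue."""
    lo, hi = new_range
    resumed = {}
    for workflow_id, queue_state in replicas.items():
        was_ours = old_range[0] <= workflow_id <= old_range[1]
        is_ours = lo <= workflow_id <= hi
        if is_ours and not was_ours:
            # Utilize the replicated queue state to adjust the local
            # workflow queue and continue from partial completion.
            resumed[workflow_id] = list(queue_state)
    return resumed
```

A workflow already inside the old range is untouched; only newly assigned identifiers trigger a takeover from replicated state.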

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (time t) sent by the producer worker before storing the first message .
US20100325190A1
CLAIM 18
. An overlay ring network based on a distributed hash table , the overlay ring network including a plurality of peer nodes , each peer node assigned responsibility for a range of identifiers within the distributed hash table , each node including : system memory ;
one or more processors ;
and one or more computer storage media having stored thereon computer executable instructions that , when executed at one of the one or more processors , cause the node to participate in workflow processing within the overlay network , including each node being configured to a) initiate a process for a workflow , b) progress through a workflow as the node assigned responsibility for the workflow , and c) transition to the node assigned responsibility for a workflow , wherein a) initiating a process for a workflow includes : locally activating a workflow process for a workflow at the node ;
and co-locating a queue and a workflow runtime for the workflow within a locally activated process ;
b) progressing through a workflow as the node assigned responsibility for the workflow includes : receiving data for the workflow at the node , the workflow identified by an identifier within the specified range of identifiers ;
queuing the received data in a workflow queue for the workflow ;
altering the queue state for the workflow queue in response to queueing the received data ;
replicating the altered queue state for the workflow queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that if responsibility for the workflow is subsequently reassigned to one of the plurality of other nodes , the altered queue state is available to a workflow runtime at the reassigned node ;
dequeuing the received data from the workflow queue to the workflow runtime within the process ;
processing the received data at the workflow runtime (first message, first criterion) to perform some work ;
further altering the queue state in response to the received data being dequeued ;
and subsequent to successful performance of the work , replicating the further altered queue state for the workflow queue to the plurality of other nodes on the overlay network , replication subsequent to successful performance of the work helping insure that the plurality of other nodes retain appropriate replicated queue state in the event performance of the work is unsuccessful ;
and c) transitioning to the node assigned responsibility for a workflow includes : receiving replicated queue state for a workflow queue at another node on the overlay network , the replicated queue state representing that a workflow has been partially completed at the other node , the replicated queue state including an identifier that identifies the workflow within the distributed hash table , the identifier being outside the specified range of identifiers assigned to the node ;
detecting a change in the node configuration on the overlay network subsequent to receiving the replicated queue state ;
updating the specified range of identifiers for the node based on the detected change in node configuration , the update to the specified range of identifiers changing the assigned responsibilities for the node ;
determining that the identifier identifying the workflow is within the updated specified range of identifiers such that the node has been assigned responsibility for the workflow in view of the changed node configuration ;
utilizing the replicated queue state to adjust the state of a local workflow queue ;
and processing data from the local workflow queue to continue the workflow from the point of partial completion reached at the other node based on the replicated queue state .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (time t) includes deleting the first message .
US20100325190A1
CLAIM 18
. An overlay ring network based on a distributed hash table , the overlay ring network including a plurality of peer nodes , each peer node assigned responsibility for a range of identifiers within the distributed hash table , each node including : system memory ;
one or more processors ;
and one or more computer storage media having stored thereon computer executable instructions that , when executed at one of the one or more processors , cause the node to participate in workflow processing within the overlay network , including each node being configured to a) initiate a process for a workflow , b) progress through a workflow as the node assigned responsibility for the workflow , and c) transition to the node assigned responsibility for a workflow , wherein a) initiating a process for a workflow includes : locally activating a workflow process for a workflow at the node ;
and co-locating a queue and a workflow runtime for the workflow within a locally activated process ;
b) progressing through a workflow as the node assigned responsibility for the workflow includes : receiving data for the workflow at the node , the workflow identified by an identifier within the specified range of identifiers ;
queuing the received data in a workflow queue for the workflow ;
altering the queue state for the workflow queue in response to queueing the received data ;
replicating the altered queue state for the workflow queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that if responsibility for the workflow is subsequently reassigned to one of the plurality of other nodes , the altered queue state is available to a workflow runtime at the reassigned node ;
dequeuing the received data from the workflow queue to the workflow runtime within the process ;
processing the received data at the workflow runtime (first message, first criterion) to perform some work ;
further altering the queue state in response to the received data being dequeued ;
and subsequent to successful performance of the work , replicating the further altered queue state for the workflow queue to the plurality of other nodes on the overlay network , replication subsequent to successful performance of the work helping insure that the plurality of other nodes retain appropriate replicated queue state in the event performance of the work is unsuccessful ;
and c) transitioning to the node assigned responsibility for a workflow includes : receiving replicated queue state for a workflow queue at another node on the overlay network , the replicated queue state representing that a workflow has been partially completed at the other node , the replicated queue state including an identifier that identifies the workflow within the distributed hash table , the identifier being outside the specified range of identifiers assigned to the node ;
detecting a change in the node configuration on the overlay network subsequent to receiving the replicated queue state ;
updating the specified range of identifiers for the node based on the detected change in node configuration , the update to the specified range of identifiers changing the assigned responsibilities for the node ;
determining that the identifier identifying the workflow is within the updated specified range of identifiers such that the node has been assigned responsibility for the workflow in view of the changed node configuration ;
utilizing the replicated queue state to adjust the state of a local workflow queue ;
and processing data from the local workflow queue to continue the workflow from the point of partial completion reached at the other node based on the replicated queue state .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion (time t) is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20100325190A1
CLAIM 18
. An overlay ring network based on a distributed hash table , the overlay ring network including a plurality of peer nodes , each peer node assigned responsibility for a range of identifiers within the distributed hash table , each node including : system memory ;
one or more processors ;
and one or more computer storage media having stored thereon computer executable instructions that , when executed at one of the one or more processors , cause the node to participate in workflow processing within the overlay network , including each node being configured to a) initiate a process for a workflow , b) progress through a workflow as the node assigned responsibility for the workflow , and c) transition to the node assigned responsibility for a workflow , wherein a) initiating a process for a workflow includes : locally activating a workflow process for a workflow at the node ;
and co-locating a queue and a workflow runtime for the workflow within a locally activated process ;
b) progressing through a workflow as the node assigned responsibility for the workflow includes : receiving data for the workflow at the node , the workflow identified by an identifier within the specified range of identifiers ;
queuing the received data in a workflow queue for the workflow ;
altering the queue state for the workflow queue in response to queueing the received data ;
replicating the altered queue state for the workflow queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that if responsibility for the workflow is subsequently reassigned to one of the plurality of other nodes , the altered queue state is available to a workflow runtime at the reassigned node ;
dequeuing the received data from the workflow queue to the workflow runtime within the process ;
processing the received data at the workflow runtime (first message, first criterion) to perform some work ;
further altering the queue state in response to the received data being dequeued ;
and subsequent to successful performance of the work , replicating the further altered queue state for the workflow queue to the plurality of other nodes on the overlay network , replication subsequent to successful performance of the work helping insure that the plurality of other nodes retain appropriate replicated queue state in the event performance of the work is unsuccessful ;
and c) transitioning to the node assigned responsibility for a workflow includes : receiving replicated queue state for a workflow queue at another node on the overlay network , the replicated queue state representing that a workflow has been partially completed at the other node , the replicated queue state including an identifier that identifies the workflow within the distributed hash table , the identifier being outside the specified range of identifiers assigned to the node ;
detecting a change in the node configuration on the overlay network subsequent to receiving the replicated queue state ;
updating the specified range of identifiers for the node based on the detected change in node configuration , the update to the specified range of identifiers changing the assigned responsibilities for the node ;
determining that the identifier identifying the workflow is within the updated specified range of identifiers such that the node has been assigned responsibility for the workflow in view of the changed node configuration ;
utilizing the replicated queue state to adjust the state of a local workflow queue ;
and processing data from the local workflow queue to continue the workflow from the point of partial completion reached at the other node based on the replicated queue state .
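Claims 6 and 7 of US8954993B2, charted above, add conditional forwarding: the intercepted consumer request is passed through to the remote datacenter queue only if a first criterion is met, claim 7's example being whether the remote queue hides a requested message upon receiving the request. A hedged sketch, with `forward_to_remote` as a hypothetical callback:

```python
def handle_message_request(request, local_cache, forward_to_remote,
                           queue_hides_on_request):
    """Sketch of claims 6-7: intercept the consumer's message request;
    forward it to the remote queue only when the criterion is met, so
    the remote hide/visibility state stays consistent; otherwise
    refrain from forwarding. Serve locally either way when possible."""
    if queue_hides_on_request:
        forward_to_remote(request)   # criterion met: keep remote state in sync
        forwarded = True
    else:
        forwarded = False            # criterion not met: refrain from forwarding
    message = local_cache.pop(0) if local_cache else None
    return message, forwarded
```

The rationale resembles visibility-timeout queues: if the remote queue marks a message hidden on request, the interceptor must still send the request so the authoritative queue's state matches what was served locally.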

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion (time t) includes whether the datacenter queue is configured to hide a requested message upon receiving the message request from the consumer worker .
US20100325190A1
CLAIM 18
. An overlay ring network based on a distributed hash table , the overlay ring network including a plurality of peer nodes , each peer node assigned responsibility for a range of identifiers within the distributed hash table , each node including : system memory ;
one or more processors ;
and one or more computer storage media having stored thereon computer executable instructions that , when executed at one of the one or more processors , cause the node to participate in workflow processing within the overlay network , including each node being configured to a) initiate a process for a workflow , b) progress through a workflow as the node assigned responsibility for the workflow , and c) transition to the node assigned responsibility for a workflow , wherein a) initiating a process for a workflow includes : locally activating a workflow process for a workflow at the node ;
and co-locating a queue and a workflow runtime for the workflow within a locally activated process ;
b) progressing through a workflow as the node assigned responsibility for the workflow includes : receiving data for the workflow at the node , the workflow identified by an identifier within the specified range of identifiers ;
queuing the received data in a workflow queue for the workflow ;
altering the queue state for the workflow queue in response to queueing the received data ;
replicating the altered queue state for the workflow queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that if responsibility for the workflow is subsequently reassigned to one of the plurality of other nodes , the altered queue state is available to a workflow runtime at the reassigned node ;
dequeuing the received data from the workflow queue to the workflow runtime within the process ;
processing the received data at the workflow runtime (first message, first criterion) to perform some work ;
further altering the queue state in response to the received data being dequeued ;
and subsequent to successful performance of the work , replicating the further altered queue state for the workflow queue to the plurality of other nodes on the overlay network , replication subsequent to successful performance of the work helping insure that the plurality of other nodes retain appropriate replicated queue state in the event performance of the work is unsuccessful ;
and c) transitioning to the node assigned responsibility for a workflow includes : receiving replicated queue state for a workflow queue at another node on the overlay network , the replicated queue state representing that a workflow has been partially completed at the other node , the replicated queue state including an identifier that identifies the workflow within the distributed hash table , the identifier being outside the specified range of identifiers assigned to the node ;
detecting a change in the node configuration on the overlay network subsequent to receiving the replicated queue state ;
updating the specified range of identifiers for the node based on the detected change in node configuration , the update to the specified range of identifiers changing the assigned responsibilities for the node ;
determining that the identifier identifying the workflow is within the updated specified range of identifiers such that the node has been assigned responsibility for the workflow in view of the changed node configuration ;
utilizing the replicated queue state to adjust the state of a local workflow queue ;
and processing data from the local workflow queue to continue the workflow from the point of partial completion reached at the other node based on the replicated queue state .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (time t) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20100325190A1
CLAIM 18
. An overlay ring network based on a distributed hash table , the overlay ring network including a plurality of peer nodes , each peer node assigned responsibility for a range of identifiers within the distributed hash table , each node including : system memory ;
one or more processors ;
and one or more computer storage media having stored thereon computer executable instructions that , when executed at one of the one or more processors , cause the node to participate in workflow processing within the overlay network , including each node being configured to a) initiate a process for a workflow , b) progress through a workflow as the node assigned responsibility for the workflow , and c) transition to the node assigned responsibility for a workflow , wherein a) initiating a process for a workflow includes : locally activating a workflow process for a workflow at the node ;
and co-locating a queue and a workflow runtime for the workflow within a locally activated process ;
b) progressing through a workflow as the node assigned responsibility for the workflow includes : receiving data for the workflow at the node , the workflow identified by an identifier within the specified range of identifiers ;
queuing the received data in a workflow queue for the workflow ;
altering the queue state for the workflow queue in response to queueing the received data ;
replicating the altered queue state for the workflow queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that if responsibility for the workflow is subsequently reassigned to one of the plurality of other nodes , the altered queue state is available to a workflow runtime at the reassigned node ;
dequeing the received data from the workflow queue to the workflow runtime within the process ;
processing the received data at the workflow runtime to perform some work (first message, first criterion) ;
further altering the queue state in response to the received data being dequeued ;
and subsequent to successful performance of the work , replicating the further altered queue state for the workflow queue to the plurality of other nodes on the overlay network , replication subsequent to successful performance of the work helping insure that the plurality of other nodes retain appropriate replicated queue state in the event performance of the work is unsuccessful ;
and c) transitioning to the node assigned responsibility for a workflow includes : receiving replicated queue state for a workflow queue at another node on the overlay network , the replicated queue state representing that a workflow has been partially completed at the other node , the replicated queue state including an identifier that identifies the workflow within the distributed hash table , the identifier being outside the specified range of identifiers assigned to the node ;
detecting a change in the node configuration on the overly network subsequent to receiving the replicated queue state ;
updating the specified range of identifiers for the node based on the detected change in node configuration , the update to the specified range of identifiers changing the assigned responsibilities for the node ;
determining that the identifier identifying the workflow is within the updated specified range of identifiers such that the node has been assigned responsibility for the workflow in view of the changed node configuration ;
utilizing the replicated queue state to adjust the state of a local workflow queue ;
and processing data from the local workflow queue to continue the workflow from the point of partial completion reached at the other node based on the replicated queue state .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (time t) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
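Claim 12's conditional forwarding can be sketched as a small routing function. The concrete criterion below (whether the remote queue hides requested messages, per claim 13) is only one illustrative choice; the patent does not prescribe this code, and all names are hypothetical.

```python
# Hypothetical sketch of claim 12: forward a consumer's message request to the
# remote datacenter queue only when a first criterion is met; otherwise
# refrain and serve from the local store. The criterion shown (remote queue
# hides messages on receive) follows claim 13 and is illustrative.
def route_message_request(request, remote_queue_hides_messages, local_store):
    if remote_queue_hides_messages:
        # Criterion met: the remote queue must see the request so it can
        # hide the message from other consumers.
        return ("forward", request)
    if local_store:
        # Criterion not met: refrain from forwarding; serve locally.
        return ("local", local_store.pop(0))
    return ("local", None)

action, payload = route_message_request({"q": "queue-A"}, False, ["m1", "m2"])
```

When the criterion is met, the request reaches the datacenter queue unchanged; otherwise the local store alone satisfies it, avoiding a round trip to the second server.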
US20100325190A1
CLAIM 18
. An overlay ring network based on a distributed hash table , the overlay ring network including a plurality of peer nodes , each peer node assigned responsibility for a range of identifiers within the distributed hash table , each node including : system memory ;
one or more processors ;
and one or more computer storage media having stored thereon computer executable instructions that , when executed at one of the one or more processors , cause the node to participate in workflow processing within the overlay network , including each node being configured to a) initiate a process for a workflow , b) progress through a workflow as the node assigned responsibility for the workflow , and c) transition to the node assigned responsibility for a workflow , wherein a) initiating a process for a workflow includes : locally activating a workflow process for a workflow at the node ;
and co-locating a queue and a workflow runtime for the workflow within a locally activated process ;
b) progressing through a workflow as the node assigned responsibility for the workflow includes : receiving data for the workflow at the node , the workflow identified by an identifier within the specified range of identifiers ;
queuing the received data in a workflow queue for the workflow ;
altering the queue state for the workflow queue in response to queueing the received data ;
replicating the altered queue state for the workflow queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that if responsibility for the workflow is subsequently reassigned to one of the plurality of other nodes , the altered queue state is available to a workflow runtime at the reassigned node ;
dequeing the received data from the workflow queue to the workflow runtime within the process ;
processing the received data at the workflow runtime to perform some work (first message, first criterion) ;
further altering the queue state in response to the received data being dequeued ;
and subsequent to successful performance of the work , replicating the further altered queue state for the workflow queue to the plurality of other nodes on the overlay network , replication subsequent to successful performance of the work helping insure that the plurality of other nodes retain appropriate replicated queue state in the event performance of the work is unsuccessful ;
and c) transitioning to the node assigned responsibility for a workflow includes : receiving replicated queue state for a workflow queue at another node on the overlay network , the replicated queue state representing that a workflow has been partially completed at the other node , the replicated queue state including an identifier that identifies the workflow within the distributed hash table , the identifier being outside the specified range of identifiers assigned to the node ;
detecting a change in the node configuration on the overly network subsequent to receiving the replicated queue state ;
updating the specified range of identifiers for the node based on the detected change in node configuration , the update to the specified range of identifiers changing the assigned responsibilities for the node ;
determining that the identifier identifying the workflow is within the updated specified range of identifiers such that the node has been assigned responsibility for the workflow in view of the changed node configuration ;
utilizing the replicated queue state to adjust the state of a local workflow queue ;
and processing data from the local workflow queue to continue the workflow from the point of partial completion reached at the other node based on the replicated queue state .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion (time t) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker .
US20100325190A1
CLAIM 18
. An overlay ring network based on a distributed hash table , the overlay ring network including a plurality of peer nodes , each peer node assigned responsibility for a range of identifiers within the distributed hash table , each node including : system memory ;
one or more processors ;
and one or more computer storage media having stored thereon computer executable instructions that , when executed at one of the one or more processors , cause the node to participate in workflow processing within the overlay network , including each node being configured to a) initiate a process for a workflow , b) progress through a workflow as the node assigned responsibility for the workflow , and c) transition to the node assigned responsibility for a workflow , wherein a) initiating a process for a workflow includes : locally activating a workflow process for a workflow at the node ;
and co-locating a queue and a workflow runtime for the workflow within a locally activated process ;
b) progressing through a workflow as the node assigned responsibility for the workflow includes : receiving data for the workflow at the node , the workflow identified by an identifier within the specified range of identifiers ;
queuing the received data in a workflow queue for the workflow ;
altering the queue state for the workflow queue in response to queueing the received data ;
replicating the altered queue state for the workflow queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that if responsibility for the workflow is subsequently reassigned to one of the plurality of other nodes , the altered queue state is available to a workflow runtime at the reassigned node ;
dequeing the received data from the workflow queue to the workflow runtime within the process ;
processing the received data at the workflow runtime to perform some work (first message, first criterion) ;
further altering the queue state in response to the received data being dequeued ;
and subsequent to successful performance of the work , replicating the further altered queue state for the workflow queue to the plurality of other nodes on the overlay network , replication subsequent to successful performance of the work helping insure that the plurality of other nodes retain appropriate replicated queue state in the event performance of the work is unsuccessful ;
and c) transitioning to the node assigned responsibility for a workflow includes : receiving replicated queue state for a workflow queue at another node on the overlay network , the replicated queue state representing that a workflow has been partially completed at the other node , the replicated queue state including an identifier that identifies the workflow within the distributed hash table , the identifier being outside the specified range of identifiers assigned to the node ;
detecting a change in the node configuration on the overly network subsequent to receiving the replicated queue state ;
updating the specified range of identifiers for the node based on the detected change in node configuration , the update to the specified range of identifiers changing the assigned responsibilities for the node ;
determining that the identifier identifying the workflow is within the updated specified range of identifiers such that the node has been assigned responsibility for the workflow in view of the changed node configuration ;
utilizing the replicated queue state to adjust the state of a local workflow queue ;
and processing data from the local workflow queue to continue the workflow from the point of partial completion reached at the other node based on the replicated queue state .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller (more processor) configured to : detect a producer worker that is executed on a first VM and sends a first message (time t) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache (system memory) at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
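The datacenter-controller variant of claim 14 can be sketched as follows, including the delete-style modification of claim 21. This is a hedged illustration, not the patented implementation; `DatacenterController` and its methods are hypothetical names.

```python
# Hypothetical sketch of claim 14: a controller intercepts a producer VM's
# message before it reaches the datacenter queue at the first location and
# places it in a queue cache at a second location, from which a consumer VM
# is later served. on_command_signal deletes a stored message (claim 21).
class DatacenterController:
    def __init__(self):
        self.queue_cache = []               # cache at the second datacenter location

    def on_producer_send(self, message):
        """Intercept before the message reaches the first-location queue."""
        self.queue_cache.append(message)

    def on_consumer_request(self):
        """Provide a cached message in response to a message request."""
        return self.queue_cache.pop(0) if self.queue_cache else None

    def on_command_signal(self, message_id):
        """Modify the stored message (here: delete it, per claim 21)."""
        self.queue_cache = [m for m in self.queue_cache if m["id"] != message_id]

ctrl = DatacenterController()
ctrl.on_producer_send({"id": 7, "body": "job"})
ctrl.on_command_signal(7)                   # command channel deletes message 7
```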
US20100325190A1
CLAIM 1
. At a node in an overlay network , the node including a processor and system memory (queue cache, queue cache includes one) , the overly network including a plurality nodes , each node in the plurality of nodes being assigned responsibility for a range of identifiers on the overlay network , a method for replicating queue state within the overlay network , the method comprising : an act of receiving data for a process at the node , the node including a process runtime for running the process and a queue for queuing data for the process , the process runtime and the queue co-located within the process at the node , the node being assigned responsibility responsible for a specified range of identifiers on the overlay network , the workflow identified by an identifier within the specified range of identifiers ;
an act of queuing the received data in the queue ;
an act of altering the queue state for the queue in response to queueing the received data ;
an act of replicating the altered queue state for the queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that if responsibility for the process is subsequently reassigned to one of the plurality of other nodes , the altered queue state is available to a process runtime at the reassigned node ;
an act of dequeing the received data from the queue to the process runtime within the process ;
and an act of the process runtime processing the received data to perform some work .
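The enqueue-replicate-dequeue cycle of '190 claim 1 can be sketched as below. The replication transport is illustrative (direct in-process copies rather than a real overlay network), and all names are hypothetical.

```python
# Hypothetical sketch of '190 claim 1: queue state is replicated to other
# overlay nodes on each state change, so a reassigned node can later resume
# the process from the replicated state.
class PeerNode:
    replica = None                          # last replicated queue state seen

class ReplicatingQueue:
    def __init__(self, peers):
        self.state = []                     # queue co-located with the runtime
        self.peers = peers                  # other nodes receiving replicas

    def _replicate(self):
        for peer in self.peers:
            peer.replica = list(self.state)  # make altered state available

    def enqueue(self, item):
        self.state.append(item)              # alter queue state on queueing
        self._replicate()                    # replicate the altered state

    def dequeue(self):
        item = self.state.pop(0)             # dequeue to the process runtime
        self._replicate()                    # replicate after the work succeeds
        return item

peer = PeerNode()
q = ReplicatingQueue([peer])
q.enqueue("data-1")
```

Replicating after enqueue and again after successful processing is what lets a peer that inherits responsibility see either the pending or the completed state, mirroring the claim's availability rationale.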

US20100325190A1
CLAIM 18
. An overlay ring network based on a distributed hash table , the overlay ring network including a plurality of peer nodes , each peer node assigned responsibility for a range of identifiers within the distributed hash table , each node including : system memory ;
one or more processors (datacenter controller) ;
and one or more computer storage media having stored thereon computer executable instructions that , when executed at one of the one or more processors , cause the node to participate in workflow processing within the overlay network , including each node being configured to a) initiate a process for a workflow , b) progress through a workflow as the node assigned responsibility for the workflow , and c) transition to the node assigned responsibility for a workflow , wherein a) initiating a process for a workflow includes : locally activating a workflow process for a workflow at the node ;
and co-locating a queue and a workflow runtime for the workflow within a locally activated process ;
b) progressing through a workflow as the node assigned responsibility for the workflow includes : receiving data for the workflow at the node , the workflow identified by an identifier within the specified range of identifiers ;
queuing the received data in a workflow queue for the workflow ;
altering the queue state for the workflow queue in response to queueing the received data ;
replicating the altered queue state for the workflow queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that if responsibility for the workflow is subsequently reassigned to one of the plurality of other nodes , the altered queue state is available to a workflow runtime at the reassigned node ;
dequeing the received data from the workflow queue to the workflow runtime within the process ;
processing the received data at the workflow runtime to perform some work (first message, first criterion) ;
further altering the queue state in response to the received data being dequeued ;
and subsequent to successful performance of the work , replicating the further altered queue state for the workflow queue to the plurality of other nodes on the overlay network , replication subsequent to successful performance of the work helping insure that the plurality of other nodes retain appropriate replicated queue state in the event performance of the work is unsuccessful ;
and c) transitioning to the node assigned responsibility for a workflow includes : receiving replicated queue state for a workflow queue at another node on the overlay network , the replicated queue state representing that a workflow has been partially completed at the other node , the replicated queue state including an identifier that identifies the workflow within the distributed hash table , the identifier being outside the specified range of identifiers assigned to the node ;
detecting a change in the node configuration on the overly network subsequent to receiving the replicated queue state ;
updating the specified range of identifiers for the node based on the detected change in node configuration , the update to the specified range of identifiers changing the assigned responsibilities for the node ;
determining that the identifier identifying the workflow is within the updated specified range of identifiers such that the node has been assigned responsibility for the workflow in view of the changed node configuration ;
utilizing the replicated queue state to adjust the state of a local workflow queue ;
and processing data from the local workflow queue to continue the workflow from the point of partial completion reached at the other node based on the replicated queue state .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (time t) sent by the producer worker before storing the first message .
US20100325190A1
CLAIM 18
. An overlay ring network based on a distributed hash table , the overlay ring network including a plurality of peer nodes , each peer node assigned responsibility for a range of identifiers within the distributed hash table , each node including : system memory ;
one or more processors ;
and one or more computer storage media having stored thereon computer executable instructions that , when executed at one of the one or more processors , cause the node to participate in workflow processing within the overlay network , including each node being configured to a) initiate a process for a workflow , b) progress through a workflow as the node assigned responsibility for the workflow , and c) transition to the node assigned responsibility for a workflow , wherein a) initiating a process for a workflow includes : locally activating a workflow process for a workflow at the node ;
and co-locating a queue and a workflow runtime for the workflow within a locally activated process ;
b) progressing through a workflow as the node assigned responsibility for the workflow includes : receiving data for the workflow at the node , the workflow identified by an identifier within the specified range of identifiers ;
queuing the received data in a workflow queue for the workflow ;
altering the queue state for the workflow queue in response to queueing the received data ;
replicating the altered queue state for the workflow queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that if responsibility for the workflow is subsequently reassigned to one of the plurality of other nodes , the altered queue state is available to a workflow runtime at the reassigned node ;
dequeing the received data from the workflow queue to the workflow runtime within the process ;
processing the received data at the workflow runtime to perform some work (first message, first criterion) ;
further altering the queue state in response to the received data being dequeued ;
and subsequent to successful performance of the work , replicating the further altered queue state for the workflow queue to the plurality of other nodes on the overlay network , replication subsequent to successful performance of the work helping insure that the plurality of other nodes retain appropriate replicated queue state in the event performance of the work is unsuccessful ;
and c) transitioning to the node assigned responsibility for a workflow includes : receiving replicated queue state for a workflow queue at another node on the overlay network , the replicated queue state representing that a workflow has been partially completed at the other node , the replicated queue state including an identifier that identifies the workflow within the distributed hash table , the identifier being outside the specified range of identifiers assigned to the node ;
detecting a change in the node configuration on the overly network subsequent to receiving the replicated queue state ;
updating the specified range of identifiers for the node based on the detected change in node configuration , the update to the specified range of identifiers changing the assigned responsibilities for the node ;
determining that the identifier identifying the workflow is within the updated specified range of identifiers such that the node has been assigned responsibility for the workflow in view of the changed node configuration ;
utilizing the replicated queue state to adjust the state of a local workflow queue ;
and processing data from the local workflow queue to continue the workflow from the point of partial completion reached at the other node based on the replicated queue state .

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache (system memory) includes one of a copy and a partial copy of the datacenter queue .
US20100325190A1
CLAIM 1
. At a node in an overlay network , the node including a processor and system memory (queue cache, queue cache includes one) , the overly network including a plurality nodes , each node in the plurality of nodes being assigned responsibility for a range of identifiers on the overlay network , a method for replicating queue state within the overlay network , the method comprising : an act of receiving data for a process at the node , the node including a process runtime for running the process and a queue for queuing data for the process , the process runtime and the queue co-located within the process at the node , the node being assigned responsibility responsible for a specified range of identifiers on the overlay network , the workflow identified by an identifier within the specified range of identifiers ;
an act of queuing the received data in the queue ;
an act of altering the queue state for the queue in response to queueing the received data ;
an act of replicating the altered queue state for the queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that if responsibility for the process is subsequently reassigned to one of the plurality of other nodes , the altered queue state is available to a process runtime at the reassigned node ;
an act of dequeing the received data from the queue to the process runtime within the process ;
and an act of the process runtime processing the received data to perform some work .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (time t) by deleting the first message .
US20100325190A1
CLAIM 18
. An overlay ring network based on a distributed hash table , the overlay ring network including a plurality of peer nodes , each peer node assigned responsibility for a range of identifiers within the distributed hash table , each node including : system memory ;
one or more processors ;
and one or more computer storage media having stored thereon computer executable instructions that , when executed at one of the one or more processors , cause the node to participate in workflow processing within the overlay network , including each node being configured to a) initiate a process for a workflow , b) progress through a workflow as the node assigned responsibility for the workflow , and c) transition to the node assigned responsibility for a workflow , wherein a) initiating a process for a workflow includes : locally activating a workflow process for a workflow at the node ;
and co-locating a queue and a workflow runtime for the workflow within a locally activated process ;
b) progressing through a workflow as the node assigned responsibility for the workflow includes : receiving data for the workflow at the node , the workflow identified by an identifier within the specified range of identifiers ;
queuing the received data in a workflow queue for the workflow ;
altering the queue state for the workflow queue in response to queueing the received data ;
replicating the altered queue state for the workflow queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that if responsibility for the workflow is subsequently reassigned to one of the plurality of other nodes , the altered queue state is available to a workflow runtime at the reassigned node ;
dequeing the received data from the workflow queue to the workflow runtime within the process ;
processing the received data at the workflow runtime to perform some work (first message, first criterion) ;
further altering the queue state in response to the received data being dequeued ;
and subsequent to successful performance of the work , replicating the further altered queue state for the workflow queue to the plurality of other nodes on the overlay network , replication subsequent to successful performance of the work helping insure that the plurality of other nodes retain appropriate replicated queue state in the event performance of the work is unsuccessful ;
and c) transitioning to the node assigned responsibility for a workflow includes : receiving replicated queue state for a workflow queue at another node on the overlay network , the replicated queue state representing that a workflow has been partially completed at the other node , the replicated queue state including an identifier that identifies the workflow within the distributed hash table , the identifier being outside the specified range of identifiers assigned to the node ;
detecting a change in the node configuration on the overly network subsequent to receiving the replicated queue state ;
updating the specified range of identifiers for the node based on the detected change in node configuration , the update to the specified range of identifiers changing the assigned responsibilities for the node ;
determining that the identifier identifying the workflow is within the updated specified range of identifiers such that the node has been assigned responsibility for the workflow in view of the changed node configuration ;
utilizing the replicated queue state to adjust the state of a local workflow queue ;
and processing data from the local workflow queue to continue the workflow from the point of partial completion reached at the other node based on the replicated queue state .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (time t) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20100325190A1
CLAIM 18
. An overlay ring network based on a distributed hash table , the overlay ring network including a plurality of peer nodes , each peer node assigned responsibility for a range of identifiers within the distributed hash table , each node including : system memory ;
one or more processors ;
and one or more computer storage media having stored thereon computer executable instructions that , when executed at one of the one or more processors , cause the node to participate in workflow processing within the overlay network , including each node being configured to a) initiate a process for a workflow , b) progress through a workflow as the node assigned responsibility for the workflow , and c) transition to the node assigned responsibility for a workflow , wherein a) initiating a process for a workflow includes : locally activating a workflow process for a workflow at the node ;
and co-locating a queue and a workflow runtime for the workflow within a locally activated process ;
b) progressing through a workflow as the node assigned responsibility for the workflow includes : receiving data for the workflow at the node , the workflow identified by an identifier within the specified range of identifiers ;
queuing the received data in a workflow queue for the workflow ;
altering the queue state for the workflow queue in response to queueing the received data ;
replicating the altered queue state for the workflow queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that if responsibility for the workflow is subsequently reassigned to one of the plurality of other nodes , the altered queue state is available to a workflow runtime at the reassigned node ;
dequeing the received data from the workflow queue to the workflow runtime within the process ;
processing the received data at the workflow runtime to perform some work (first message, first criterion) ;
further altering the queue state in response to the received data being dequeued ;
and subsequent to successful performance of the work , replicating the further altered queue state for the workflow queue to the plurality of other nodes on the overlay network , replication subsequent to successful performance of the work helping insure that the plurality of other nodes retain appropriate replicated queue state in the event performance of the work is unsuccessful ;
and c) transitioning to the node assigned responsibility for a workflow includes : receiving replicated queue state for a workflow queue at another node on the overlay network , the replicated queue state representing that a workflow has been partially completed at the other node , the replicated queue state including an identifier that identifies the workflow within the distributed hash table , the identifier being outside the specified range of identifiers assigned to the node ;
detecting a change in the node configuration on the overly network subsequent to receiving the replicated queue state ;
updating the specified range of identifiers for the node based on the detected change in node configuration , the update to the specified range of identifiers changing the assigned responsibilities for the node ;
determining that the identifier identifying the workflow is within the updated specified range of identifiers such that the node has been assigned responsibility for the workflow in view of the changed node configuration ;
utilizing the replicated queue state to adjust the state of a local workflow queue ;
and processing data from the local workflow queue to continue the workflow from the point of partial completion reached at the other node based on the replicated queue state .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion (time t) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker .
US20100325190A1
CLAIM 18
. An overlay ring network based on a distributed hash table , the overlay ring network including a plurality of peer nodes , each peer node assigned responsibility for a range of identifiers within the distributed hash table , each node including : system memory ;
one or more processors ;
and one or more computer storage media having stored thereon computer executable instructions that , when executed at one of the one or more processors , cause the node to participate in workflow processing within the overlay network , including each node being configured to a) initiate a process for a workflow , b) progress through a workflow as the node assigned responsibility for the workflow , and c) transition to the node assigned responsibility for a workflow , wherein a) initiating a process for a workflow includes : locally activating a workflow process for a workflow at the node ;
and co-locating a queue and a workflow runtime for the workflow within a locally activated process ;
b) progressing through a workflow as the node assigned responsibility for the workflow includes : receiving data for the workflow at the node , the workflow identified by an identifier within the specified range of identifiers ;
queuing the received data in a workflow queue for the workflow ;
altering the queue state for the workflow queue in response to queueing the received data ;
replicating the altered queue state for the workflow queue to a plurality of other nodes on the overlay network , replicating the altered queue state increasing the availability of the altered queue state such that if responsibility for the workflow is subsequently reassigned to one of the plurality of other nodes , the altered queue state is available to a workflow runtime at the reassigned node ;
dequeuing the received data from the workflow queue to the workflow runtime within the process ;
processing the received data at the workflow runtime (first message, first criterion) to perform some work ;
further altering the queue state in response to the received data being dequeued ;
and subsequent to successful performance of the work , replicating the further altered queue state for the workflow queue to the plurality of other nodes on the overlay network , replication subsequent to successful performance of the work helping ensure that the plurality of other nodes retain appropriate replicated queue state in the event performance of the work is unsuccessful ;
and c) transitioning to the node assigned responsibility for a workflow includes : receiving replicated queue state for a workflow queue at another node on the overlay network , the replicated queue state representing that a workflow has been partially completed at the other node , the replicated queue state including an identifier that identifies the workflow within the distributed hash table , the identifier being outside the specified range of identifiers assigned to the node ;
detecting a change in the node configuration on the overlay network subsequent to receiving the replicated queue state ;
updating the specified range of identifiers for the node based on the detected change in node configuration , the update to the specified range of identifiers changing the assigned responsibilities for the node ;
determining that the identifier identifying the workflow is within the updated specified range of identifiers such that the node has been assigned responsibility for the workflow in view of the changed node configuration ;
utilizing the replicated queue state to adjust the state of a local workflow queue ;
and processing data from the local workflow queue to continue the workflow from the point of partial completion reached at the other node based on the replicated queue state .
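The transition logic in element (c) of this prior-art claim (a replica received for an out-of-range identifier is adopted only after a configuration change brings the identifier into the node's updated range) can be sketched roughly as follows. This is a minimal illustration; the class, attribute, and method names are illustrative assumptions, not drawn from either patent:

```python
# Hypothetical sketch of element (c): a node holds replicated queue state
# for workflows outside its identifier range, and adopts a workflow once a
# node-configuration change places that workflow's identifier in range.

class Node:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi      # assigned identifier range [lo, hi)
        self.replicated = {}           # workflow id -> replicated queue state
        self.local_queues = {}         # workflow id -> local workflow queue

    def receive_replica(self, wf_id, queue_state):
        # The replica may arrive for an identifier outside the current range.
        self.replicated[wf_id] = queue_state

    def on_config_change(self, new_lo, new_hi):
        # Update the assigned range, then adopt any replicated workflow
        # whose identifier now falls within the updated range.
        self.lo, self.hi = new_lo, new_hi
        for wf_id, state in self.replicated.items():
            if self.lo <= wf_id < self.hi and wf_id not in self.local_queues:
                # Seed the local workflow queue from the replicated state,
                # continuing from the point of partial completion.
                self.local_queues[wf_id] = list(state)

node = Node(0, 100)
node.receive_replica(150, ["step3", "step4"])   # id 150 is outside [0, 100)
node.on_config_change(0, 200)                   # range now covers id 150
print(node.local_queues[150])                   # ['step3', 'step4']
```

The range check against the updated `[lo, hi)` interval stands in for the distributed-hash-table responsibility test recited in the claim.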




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20100325219A1

Filed: 2009-06-22     Issued: 2010-12-23

Adding configurable messaging functionality to an infrastructure

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC

Clemens F. Vasters, David A. Wortendyke
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache (system memory) at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20100325219A1
CLAIM 1
. At a computer system , the computer system including one or more processors and system memory (queue cache, queue cache includes one) , a method for adding messaging functionality to an overlay network , the method comprising : an act of accessing a hierarchical representation of a namespace , the namespace representing the overlay network ;
an act of identifying a portion of the namespace where the messaging related functionality is to be installed ;
and an act of installing the messaging related functionality into the namespace at the identified portion of the namespace , installation including : an act of identifying hardware components that are to be used to implement the messaging related functionality ;
and an act of sending communication to the overlay network to implement the messaging related functionality on the hardware components , including : an act of setting up the hardware components to operate within the namespace ;
and an act of requesting that the overlay network configure the hardware components with behaviors for implementing the messaging related functionality in accordance with a specified policy in combination with setting up the hardware , setting up the hardware components in combination with requesting configuration of the hardware components performed in a unified manner such that setup and configuration of the hardware components are both essentially simultaneously performed through interacting with the namespace .
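As context for the mappings against '993 claim 1, the local caching behavior it recites can be sketched as follows, under assumed semantics: the producer's message is intercepted and cached at the first server, the co-located consumer is served from that cache, and a command-channel signal modifies (here, deletes) the cached copy. All names are illustrative, not taken from the patent:

```python
# Minimal sketch of a queue cache holding a partial local copy of a
# remote datacenter queue, serving co-located producer/consumer workers.
from collections import deque

class QueueCache:
    def __init__(self):
        self.cache = deque()           # partial copy of the datacenter queue

    def on_producer_send(self, message):
        # Intercept the outgoing message and store it locally.
        self.cache.append(message)

    def on_consumer_request(self):
        # Serve the locally stored message instead of the remote queue.
        return self.cache[0] if self.cache else None

    def on_command_signal(self, signal):
        # Command channel signal from the datacenter queue, e.g. a
        # confirmation that the message was deleted remotely.
        if signal == "delete" and self.cache:
            self.cache.popleft()

qc = QueueCache()
qc.on_producer_send("job-1")
print(qc.on_consumer_request())   # job-1
qc.on_command_signal("delete")
print(qc.on_consumer_request())   # None
```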

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller (more processor) configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache (system memory) at a second datacenter location (external hardware) different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20100325219A1
CLAIM 1
. At a computer system , the computer system including one or more processor (datacenter controller) s and system memory (queue cache, queue cache includes one) , a method for adding messaging functionality to an overlay network , the method comprising : an act of accessing a hierarchical representation of a namespace , the namespace representing the overlay network ;
an act of identifying a portion of the namespace where the messaging related functionality is to be installed ;
and an act of installing the messaging related functionality into the namespace at the identified portion of the namespace , installation including : an act of identifying hardware components that are to be used to implement the messaging related functionality ;
and an act of sending communication to the overlay network to implement the messaging related functionality on the hardware components , including : an act of setting up the hardware components to operate within the namespace ;
and an act of requesting that the overlay network configure the hardware components with behaviors for implementing the messaging related functionality in accordance with a specified policy in combination with setting up the hardware , setting up the hardware components in combination with requesting configuration of the hardware components performed in a unified manner such that setup and configuration of the hardware components are both essentially simultaneously performed through interacting with the namespace .

US20100325219A1
CLAIM 6
. The method as recited in claim 5 , further comprising an act of installing a proxy at the identified portion of the namespace to forward messages to the external hardware (second datacenter location) components .

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache (system memory) includes one of a copy and a partial copy of the datacenter queue .
US20100325219A1
CLAIM 1
. At a computer system , the computer system including one or more processors and system memory (queue cache, queue cache includes one) , a method for adding messaging functionality to an overlay network , the method comprising : an act of accessing a hierarchical representation of a namespace , the namespace representing the overlay network ;
an act of identifying a portion of the namespace where the messaging related functionality is to be installed ;
and an act of installing the messaging related functionality into the namespace at the identified portion of the namespace , installation including : an act of identifying hardware components that are to be used to implement the messaging related functionality ;
and an act of sending communication to the overlay network to implement the messaging related functionality on the hardware components , including : an act of setting up the hardware components to operate within the namespace ;
and an act of requesting that the overlay network configure the hardware components with behaviors for implementing the messaging related functionality in accordance with a specified policy in combination with setting up the hardware , setting up the hardware components in combination with requesting configuration of the hardware components performed in a unified manner such that setup and configuration of the hardware components are both essentially simultaneously performed through interacting with the namespace .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20090228564A1

Filed: 2009-03-04     Issued: 2009-09-10

Electronic mail forwarding service

(Original Assignee) AOL Inc     (Current Assignee) Verizon Media Inc

Keith Martin Hamburg
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (email document) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20090228564A1
CLAIM 9
. A computer-implemented method of providing email message forwarding functionality , the method comprising : receiving and storing an original email message addressed to an original email address of a user , said original email message including a body section containing message content ;
generating a forwarding email document (first message) comprising a summary of the original email message and a unique link for accessing the original email message , said forwarding email document lacking at least a portion of said message content of the original email message ;
and sending the forwarding email document to a forwarding email address of a user .

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (email document) sent by the producer worker before storing the first message .
US20090228564A1
CLAIM 9
. A computer-implemented method of providing email message forwarding functionality , the method comprising : receiving and storing an original email message addressed to an original email address of a user , said original email message including a body section containing message content ;
generating a forwarding email document (first message) comprising a summary of the original email message and a unique link for accessing the original email message , said forwarding email document lacking at least a portion of said message content of the original email message ;
and sending the forwarding email document to a forwarding email address of a user .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (email document) includes deleting the first message .
US20090228564A1
CLAIM 9
. A computer-implemented method of providing email message forwarding functionality , the method comprising : receiving and storing an original email message addressed to an original email address of a user , said original email message including a body section containing message content ;
generating a forwarding email document (first message) comprising a summary of the original email message and a unique link for accessing the original email message , said forwarding email document lacking at least a portion of said message content of the original email message ;
and sending the forwarding email document to a forwarding email address of a user .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (email document) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20090228564A1
CLAIM 9
. A computer-implemented method of providing email message forwarding functionality , the method comprising : receiving and storing an original email message addressed to an original email address of a user , said original email message including a body section containing message content ;
generating a forwarding email document (first message) comprising a summary of the original email message and a unique link for accessing the original email message , said forwarding email document lacking at least a portion of said message content of the original email message ;
and sending the forwarding email document to a forwarding email address of a user .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message (email document) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM (user selection) and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20090228564A1
CLAIM 3
. The method of claim 1 , further comprising , in response to user selection (second VM) of said link , returning said web page that displays the first email message without requiring the user to log in .

US20090228564A1
CLAIM 9
. A computer-implemented method of providing email message forwarding functionality , the method comprising : receiving and storing an original email message addressed to an original email address of a user , said original email message including a body section containing message content ;
generating a forwarding email document (first message) comprising a summary of the original email message and a unique link for accessing the original email message , said forwarding email document lacking at least a portion of said message content of the original email message ;
and sending the forwarding email document to a forwarding email address of a user .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (email document) sent by the producer worker before storing the first message .
US20090228564A1
CLAIM 9
. A computer-implemented method of providing email message forwarding functionality , the method comprising : receiving and storing an original email message addressed to an original email address of a user , said original email message including a body section containing message content ;
generating a forwarding email document (first message) comprising a summary of the original email message and a unique link for accessing the original email message , said forwarding email document lacking at least a portion of said message content of the original email message ;
and sending the forwarding email document to a forwarding email address of a user .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (email document) by deleting the first message .
US20090228564A1
CLAIM 9
. A computer-implemented method of providing email message forwarding functionality , the method comprising : receiving and storing an original email message addressed to an original email address of a user , said original email message including a body section containing message content ;
generating a forwarding email document (first message) comprising a summary of the original email message and a unique link for accessing the original email message , said forwarding email document lacking at least a portion of said message content of the original email message ;
and sending the forwarding email document to a forwarding email address of a user .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20100161753A1

Filed: 2008-12-19     Issued: 2010-06-24

Method and communication device for processing data for transmission from the communication device to a second communication device

(Original Assignee) Research in Motion Ltd     (Current Assignee) BlackBerry Ltd

Gerhard Dietrich Klassen, Robert Edwards
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request (instant messaging) to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request (instant messaging) from the consumer worker to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion (communication network) is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20100161753A1
CLAIM 7
. The method of claim 6 , wherein said determining if said storage device is accessible to said second communication device comprises determining if said second communication device and said storage device are each associated with a same communication network (first criterion) .

US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .
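The criterion-gated forwarding of '993 claims 6 and 7 (intercept the consumer's message request, forward it to the datacenter queue only when a first criterion is met, otherwise refrain) can be sketched as below. The specific criterion shown, whether the queue hides a requested message upon delivery as in visibility-timeout style queues, follows claim 7; the function and parameter names are assumptions:

```python
# Sketch: forward a consumer's message request to the remote datacenter
# queue only when the first criterion is met; otherwise handle locally.

def handle_request(request, queue_hides_on_receive, forward, serve_locally):
    # First criterion (per claim 7): does the datacenter queue hide the
    # requested message upon receiving the request from the consumer?
    if queue_hides_on_receive:
        return forward(request)        # remote queue must mark it hidden
    return serve_locally(request)      # safe to satisfy purely locally

forwarded = []
r1 = handle_request("req-1", True, forwarded.append, lambda r: "local")
r2 = handle_request("req-2", False, forwarded.append, lambda r: "local")
print(forwarded)   # ['req-1']
print(r2)          # local
```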

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion (communication network) includes whether the datacenter queue is configured to hide a requested message upon receiving the message request (instant messaging) from the consumer worker .
US20100161753A1
CLAIM 7
. The method of claim 6 , wherein said determining if said storage device is accessible to said second communication device comprises determining if said second communication device and said storage device are each associated with a same communication network (first criterion) .

US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request (instant messaging) to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request (instant messaging) from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (communication network) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20100161753A1
CLAIM 7
. The method of claim 6 , wherein said determining if said storage device is accessible to said second communication device comprises determining if said second communication device and said storage device are each associated with a same communication network (first criterion) .

US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion (communication network) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request (instant messaging) from the consumer worker .
US20100161753A1
CLAIM 7
. The method of claim 6 , wherein said determining if said storage device is accessible to said second communication device comprises determining if said second communication device and said storage device are each associated with a same communication network (first criterion) .

US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request (instant messaging) to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request (instant messaging) from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (communication network) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20100161753A1
CLAIM 7
. The method of claim 6 , wherein said determining if said storage device is accessible to said second communication device comprises determining if said second communication device and said storage device are each associated with a same communication network (first criterion) .

US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion (communication network) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request (instant messaging) from the consumer worker .
US20100161753A1
CLAIM 7
. The method of claim 6 , wherein said determining if said storage device is accessible to said second communication device comprises determining if said second communication device and said storage device are each associated with a same communication network (first criterion) .

US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
JP2010020650A

Filed: 2008-07-14     Issued: 2010-01-28

Information processing system and information processing method, robot control system and control method, and computer program

(Original Assignee) Sony Corp; ソニー株式会社     

Atsushi Miyamoto, 敦史 宮本
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (送信メッセージ 'outgoing message', 受信メッセージ 'incoming message') to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
JP2010020650A
CLAIM 8
The information processing system according to claim 5, wherein each module includes a function for obtaining a list of outgoing messages (送信メッセージ; first message) and a function for obtaining a list of incoming messages (受信メッセージ; first message); the system further comprises means for collecting message send/receive information regarding the sending module and receiving module of each message, based on each module's functions for obtaining the lists of outgoing and incoming messages, and a configuration file that describes the name of the computer executing each process, the modules placed within each process, and the message-processing timing of each module, and that designates messages having processing-order dependencies; and the processing-order-dependency acquisition means acquires the processing-order dependency relationships using the configuration file and the message send/receive information.

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (送信メッセージ 'outgoing message', 受信メッセージ 'incoming message') sent by the producer worker before storing the first message .
JP2010020650A
CLAIM 8
The information processing system according to claim 5, wherein each module includes a function for obtaining a list of outgoing messages (送信メッセージ; first message) and a function for obtaining a list of incoming messages (受信メッセージ; first message); the system further comprises means for collecting message send/receive information regarding the sending module and receiving module of each message, based on each module's functions for obtaining the lists of outgoing and incoming messages, and a configuration file that describes the name of the computer executing each process, the modules placed within each process, and the message-processing timing of each module, and that designates messages having processing-order dependencies; and the processing-order-dependency acquisition means acquires the processing-order dependency relationships using the configuration file and the message send/receive information.

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (送信メッセージ 'outgoing message', 受信メッセージ 'incoming message') includes deleting the first message .
JP2010020650A
CLAIM 8
The information processing system according to claim 5, wherein each module includes a function for obtaining a list of outgoing messages (送信メッセージ; first message) and a function for obtaining a list of incoming messages (受信メッセージ; first message); the system further comprises means for collecting message send/receive information regarding the sending module and receiving module of each message, based on each module's functions for obtaining the lists of outgoing and incoming messages, and a configuration file that describes the name of the computer executing each process, the modules placed within each process, and the message-processing timing of each module, and that designates messages having processing-order dependencies; and the processing-order-dependency acquisition means acquires the processing-order dependency relationships using the configuration file and the message send/receive information.

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module (受信モジュール, 送信モジュール, 他のモジュール) configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (送信メッセージ, 受信メッセージ) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module (受信モジュール, 送信モジュール, 他のモジュール) configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
JP2010020650A
CLAIM 4
Each module is equipped with a message queue that temporarily holds messages received from other modules (他のモジュール: queue usage detector module, processing module); when a module is the destination of a message, it temporarily stores the received message in that message queue if the sending module is not within the same process, but omits storing the received message in the message queue if the sending module is within the same process. The information processing system according to claim 1.

JP2010020650A
CLAIM 8
Each module includes a function that obtains a list of outgoing messages (送信メッセージ: first message) and a function that obtains a list of incoming messages (受信メッセージ: first message); the system comprises means for collecting message transmission/reception information about the sending module (送信モジュール: queue usage detector module, processing module) and the receiving module (受信モジュール: queue usage detector module, processing module) of each message, based on each module's aforementioned functions for obtaining the lists of outgoing and incoming messages; and further comprises a configuration file that describes the name of the computer executing each process, the modules arranged within each process, and the message-processing timing of each module, and that designates messages having processing-order dependencies; wherein the processing-order-dependency acquisition means acquires the processing-order dependency relationships using the configuration file and the message transmission/reception information. The information processing system according to claim 5.

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module (受信モジュール, 送信モジュール, 他のモジュール) is further configured to build a table of queue usage based on at least one observed datacenter queue request (する手段) .
JP2010020650A
CLAIM 4
Each module is equipped with a message queue that temporarily holds messages received from other modules (他のモジュール: queue usage detector module, processing module); when a module is the destination of a message, it temporarily stores the received message in that message queue if the sending module is not within the same process, but omits storing the received message in the message queue if the sending module is within the same process. The information processing system according to claim 1.

JP2010020650A
CLAIM 8
Each module includes a function that obtains a list of outgoing messages and a function that obtains a list of incoming messages; the system comprises means (収集する手段: datacenter queue request) for collecting message transmission/reception information about the sending module (送信モジュール: queue usage detector module, processing module) and the receiving module (受信モジュール: queue usage detector module, processing module) of each message, based on each module's aforementioned functions for obtaining the lists of outgoing and incoming messages; and further comprises a configuration file that describes the name of the computer executing each process, the modules arranged within each process, and the message-processing timing of each module, and that designates messages having processing-order dependencies; wherein the processing-order-dependency acquisition means acquires the processing-order dependency relationships using the configuration file and the message transmission/reception information. The information processing system according to claim 5.

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module (受信モジュール, 送信モジュール, 他のモジュール) is further configured to observe the at least one observed datacenter queue request (する手段) .
JP2010020650A
CLAIM 4
Each module is equipped with a message queue that temporarily holds messages received from other modules (他のモジュール: queue usage detector module, processing module); when a module is the destination of a message, it temporarily stores the received message in that message queue if the sending module is not within the same process, but omits storing the received message in the message queue if the sending module is within the same process. The information processing system according to claim 1.

JP2010020650A
CLAIM 8
Each module includes a function that obtains a list of outgoing messages and a function that obtains a list of incoming messages; the system comprises means (収集する手段: datacenter queue request) for collecting message transmission/reception information about the sending module (送信モジュール: queue usage detector module, processing module) and the receiving module (受信モジュール: queue usage detector module, processing module) of each message, based on each module's aforementioned functions for obtaining the lists of outgoing and incoming messages; and further comprises a configuration file that describes the name of the computer executing each process, the modules arranged within each process, and the message-processing timing of each module, and that designates messages having processing-order dependencies; wherein the processing-order-dependency acquisition means acquires the processing-order dependency relationships using the configuration file and the message transmission/reception information. The information processing system according to claim 5.

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module (受信モジュール, 送信モジュール, 他のモジュール) is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
JP2010020650A
CLAIM 4
Each module is equipped with a message queue that temporarily holds messages received from other modules (他のモジュール: queue usage detector module, processing module); when a module is the destination of a message, it temporarily stores the received message in that message queue if the sending module is not within the same process, but omits storing the received message in the message queue if the sending module is within the same process. The information processing system according to claim 1.

JP2010020650A
CLAIM 8
Each module includes a function that obtains a list of outgoing messages and a function that obtains a list of incoming messages; the system comprises means for collecting message transmission/reception information about the sending module (送信モジュール: queue usage detector module, processing module) and the receiving module (受信モジュール: queue usage detector module, processing module) of each message, based on each module's aforementioned functions for obtaining the lists of outgoing and incoming messages; and further comprises a configuration file that describes the name of the computer executing each process, the modules arranged within each process, and the message-processing timing of each module, and that designates messages having processing-order dependencies; wherein the processing-order-dependency acquisition means acquires the processing-order dependency relationships using the configuration file and the message transmission/reception information. The information processing system according to claim 5.
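The criterion-gated forwarding recited in claims 12 and 13 (forward an intercepted consumer request to the remote queue only if a first criterion is met, e.g. whether the queue hides a requested message on a read) can be sketched as follows; all names are hypothetical, not taken from the patent:

```python
def handle_message_request(local_cache, remote_queue, queue_hides_on_read):
    """Illustrative sketch of US8954993B2 claims 12-13: an intercepted
    consumer message request is forwarded to the remote datacenter queue
    only when the first criterion is met."""
    if queue_hides_on_read:
        # Criterion met: forward the request so the remote queue can hide
        # the requested message from other consumers.
        remote_queue.append("message-request")
    # Criterion not met: refrain from forwarding. Either way, the consumer
    # is answered from the local cache.
    return local_cache[0] if local_cache else None
```

The design point is that forwarding preserves the remote queue's visibility semantics, while the reply itself never leaves the first server.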

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message (送信メッセージ, 受信メッセージ) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
JP2010020650A
CLAIM 8
Each module includes a function that obtains a list of outgoing messages (送信メッセージ: first message) and a function that obtains a list of incoming messages (受信メッセージ: first message); the system comprises means for collecting message transmission/reception information about the sending module and the receiving module of each message, based on each module's aforementioned functions for obtaining the lists of outgoing and incoming messages; and further comprises a configuration file that describes the name of the computer executing each process, the modules arranged within each process, and the message-processing timing of each module, and that designates messages having processing-order dependencies; wherein the processing-order-dependency acquisition means acquires the processing-order dependency relationships using the configuration file and the message transmission/reception information. The information processing system according to claim 5.

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (送信メッセージ, 受信メッセージ) sent by the producer worker before storing the first message .
JP2010020650A
CLAIM 8
Each module includes a function that obtains a list of outgoing messages (送信メッセージ: first message) and a function that obtains a list of incoming messages (受信メッセージ: first message); the system comprises means for collecting message transmission/reception information about the sending module and the receiving module of each message, based on each module's aforementioned functions for obtaining the lists of outgoing and incoming messages; and further comprises a configuration file that describes the name of the computer executing each process, the modules arranged within each process, and the message-processing timing of each module, and that designates messages having processing-order dependencies; wherein the processing-order-dependency acquisition means acquires the processing-order dependency relationships using the configuration file and the message transmission/reception information. The information processing system according to claim 5.

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue request (する手段) .
JP2010020650A
CLAIM 8
Each module includes a function that obtains a list of outgoing messages and a function that obtains a list of incoming messages; the system comprises means (収集する手段: datacenter queue request) for collecting message transmission/reception information about the sending module and the receiving module of each message, based on each module's aforementioned functions for obtaining the lists of outgoing and incoming messages; and further comprises a configuration file that describes the name of the computer executing each process, the modules arranged within each process, and the message-processing timing of each module, and that designates messages having processing-order dependencies; wherein the processing-order-dependency acquisition means acquires the processing-order dependency relationships using the configuration file and the message transmission/reception information. The information processing system according to claim 5.

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue request (する手段) .
JP2010020650A
CLAIM 8
Each module includes a function that obtains a list of outgoing messages and a function that obtains a list of incoming messages; the system comprises means (収集する手段: datacenter queue request) for collecting message transmission/reception information about the sending module and the receiving module of each message, based on each module's aforementioned functions for obtaining the lists of outgoing and incoming messages; and further comprises a configuration file that describes the name of the computer executing each process, the modules arranged within each process, and the message-processing timing of each module, and that designates messages having processing-order dependencies; wherein the processing-order-dependency acquisition means acquires the processing-order dependency relationships using the configuration file and the message transmission/reception information. The information processing system according to claim 5.

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (送信メッセージ, 受信メッセージ) by deleting the first message .
JP2010020650A
CLAIM 8
Each module includes a function that obtains a list of outgoing messages (送信メッセージ: first message) and a function that obtains a list of incoming messages (受信メッセージ: first message); the system comprises means for collecting message transmission/reception information about the sending module and the receiving module of each message, based on each module's aforementioned functions for obtaining the lists of outgoing and incoming messages; and further comprises a configuration file that describes the name of the computer executing each process, the modules arranged within each process, and the message-processing timing of each module, and that designates messages having processing-order dependencies; wherein the processing-order-dependency acquisition means acquires the processing-order dependency relationships using the configuration file and the message transmission/reception information. The information processing system according to claim 5.




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20090249357A1

Filed: 2008-07-02     Issued: 2009-10-01

Systems and methods for inter process communication based on queues

(Original Assignee) VMware Inc     (Current Assignee) VMware Inc

Anupam Chanda, Kevin Scott CHRISTOPHER, Jeremy SUGERMAN, Petr Vandrovec, Gustav Seth WIBLING
US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter (readable media) location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM (virtual machines) and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20090249357A1
CLAIM 4
. The method as recited in claim 3 , wherein update of one of the head pointer by the second virtual machines (second VM, second VMs) causes a second page fault , wherein handling of the second page fault includes updating a first head pointer in the first queue .

US20090249357A1
CLAIM 6
. A computer readable media (first datacenter) for storing programming instructions for data communication between a first virtual machine and a second virtual machine , the computer readable media comprising : programming instructions for copying data from the first virtual machine to a first queue , the first queue being configured to receive the data from the first virtual machine , the first queue having a first queue header section and a first queue data section , the first queue header being read protected and configured to store a tail pointer that points to an end of the data in the first queue ;
programming instructions for updating the tail pointer in the first header section , wherein the update of the tail pointer causes a page fault ;
and programming instructions for handling the page fault through a page fault handler , the handling includes copying the data from the first queue to a second queue , the second queue being configured to receive a copy of the data and to allow the second virtual machine to access the copy of the data , wherein , the second virtual machine is executing in a record/replay mode .
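The fault-mediated queue of US20090249357A1 claim 6 (a read-protected queue header whose tail-pointer update triggers a page fault, and a handler that copies the data into the second VM's queue) can be sketched in simplified form; the fault is simulated by a direct handler call, and all names are illustrative:

```python
class FaultMediatedQueue:
    """Simplified sketch of US20090249357A1 claim 6: updating the tail
    pointer in the protected header would fault; the handler copies data
    from the first queue into the second VM's queue."""

    def __init__(self):
        self.first_queue = []   # written by the first virtual machine
        self.second_queue = []  # read by the second virtual machine
        self.tail = 0           # tail pointer in the protected header

    def copy_from_first_vm(self, data):
        self.first_queue.append(data)
        self._update_tail(len(self.first_queue))  # triggers the "page fault"

    def _update_tail(self, new_tail):
        # In the reference, writing the read-protected header faults; here
        # the handler is simply invoked directly.
        self.tail = new_tail
        self._page_fault_handler()

    def _page_fault_handler(self):
        # Handling the fault copies the data so the second VM (possibly
        # executing in record/replay mode) can access it.
        self.second_queue = list(self.first_queue)
```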

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker and the consumer worker are co-located on a multi-core device at the first datacenter (readable media) location .
US20090249357A1
CLAIM 6
. A computer readable media (first datacenter) for storing programming instructions for data communication between a first virtual machine and a second virtual machine , the computer readable media comprising : programming instructions for copying data from the first virtual machine to a first queue , the first queue being configured to receive the data from the first virtual machine , the first queue having a first queue header section and a first queue data section , the first queue header being read protected and configured to store a tail pointer that points to an end of the data in the first queue ;
programming instructions for updating the tail pointer in the first header section , wherein the update of the tail pointer causes a page fault ;
and programming instructions for handling the page fault through a page fault handler , the handling includes copying the data from the first queue to a second queue , the second queue being configured to receive a copy of the data and to allow the second virtual machine to access the copy of the data , wherein , the second virtual machine is executing in a record/replay mode .

US8954993B2
CLAIM 20
. The datacenter of claim 14 , wherein the first and second VMs (virtual machines) are configured to execute on the same physical machine .
US20090249357A1
CLAIM 4
. The method as recited in claim 3 , wherein update of one of the head pointer by the second virtual machines (second VM, second VMs) causes a second page fault , wherein handling of the second page fault includes updating a first head pointer in the first queue .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20080276241A1

Filed: 2008-05-05     Issued: 2008-11-06

Distributed priority queue that maintains item locality

(Original Assignee) Avaya Inc     (Current Assignee) Avaya Inc

Ratan Bajpai, Krishna Kishore Dhara, Venkatesh Krishnaswamy
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (time t) to a datacenter queue at least partially stored at a second server (telephone calls) ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20080276241A1
CLAIM 11
. The method of claim 10 wherein said data items comprise data related to telephone calls (second server) .

US20080276241A1
CLAIM 12
. The method of claim 11 wherein priority comprises a time t (first message, first criterion) that a given data item was received at a given node .

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (time t) sent by the producer worker before storing the first message .
US20080276241A1
CLAIM 12
. The method of claim 11 wherein priority comprises a time t (first message, first criterion) that a given data item was received at a given node .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (time t) includes deleting the first message .
US20080276241A1
CLAIM 12
. The method of claim 11 wherein priority comprises a time t (first message, first criterion) that a given data item was received at a given node .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion (time t) is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20080276241A1
CLAIM 12
. The method of claim 11 wherein priority comprises a time t (first message, first criterion) that a given data item was received at a given node .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion (time t) includes whether the datacenter queue is configured to hide a requested message upon receiving the message request from the consumer worker .
US20080276241A1
CLAIM 12
. The method of claim 11 wherein priority comprises a time t (first message, first criterion) that a given data item was received at a given node .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (time t) to a datacenter queue at least partially stored at a second server (telephone calls) ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20080276241A1
CLAIM 11
. The method of claim 10 wherein said data items comprise data related to telephone calls (second server) .

US20080276241A1
CLAIM 12
. The method of claim 11 wherein priority comprises a time t (first message, first criterion) that a given data item was received at a given node .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (time t) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20080276241A1
CLAIM 12
. The method of claim 11 wherein priority comprises a time t (first message, first criterion) that a given data item was received at a given node .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion (time t) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker .
US20080276241A1
CLAIM 12
. The method of claim 11 wherein priority comprises a time t (first message, first criterion) that a given data item was received at a given node .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message (time t) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20080276241A1
CLAIM 12
. The method of claim 11 wherein priority comprises a time t (first message, first criterion) that a given data item was received at a given node .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (time t) sent by the producer worker before storing the first message .
US20080276241A1
CLAIM 12
. The method of claim 11 wherein priority comprises a time t (first message, first criterion) that a given data item was received at a given node .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (time t) by deleting the first message .
US20080276241A1
CLAIM 12
. The method of claim 11 wherein priority comprises a time t (first message, first criterion) that a given data item was received at a given node .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (time t) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20080276241A1
CLAIM 12
. The method of claim 11 wherein priority comprises a time t (first message, first criterion) that a given data item was received at a given node .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion (time t) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker .
US20080276241A1
CLAIM 12
. The method of claim 11 wherein priority comprises a time t (first message, first criterion) that a given data item was received at a given node .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20090241118A1

Filed: 2008-03-20     Issued: 2009-09-24

System and method for processing interface requests in batch

(Original Assignee) American Express Travel Related Services Co Inc     (Current Assignee) Liberty Peak Ventures LLC

Krishna K. Lingamneni
US8954993B2
CLAIM 1
. A method to locally process queue requests (requesting application) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20090241118A1
CLAIM 1
. A method for facilitating a reply to a real-time request , the method including : managing the number of currently executing batch jobs ;
submitting the request as a batch job , wherein the batch job executes business logic corresponding at least in part to the request , wherein the request was received from a requesting application (queue requests) and stored into a request queue ;
receiving an output of the batch job ;
formatting a reply message corresponding to the request ;
and , storing the reply message in an accessible reply queue .
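The batch-mediated request/reply flow of US20090241118A1 claim 1 (a queued real-time request executed as a batch job, with the formatted reply stored in an accessible reply queue) can be sketched as follows; function and field names are hypothetical:

```python
from collections import deque

def run_batch_job(request_queue, reply_queue, business_logic):
    """Illustrative sketch of US20090241118A1 claim 1: a request from a
    requesting application is taken from the request queue, submitted as a
    batch job, and its output is stored as a formatted reply message."""
    request = request_queue.popleft()       # request stored by the requesting application
    output = business_logic(request)        # the batch job executes the business logic
    reply = {"request": request, "reply": output}  # format the reply message
    reply_queue.append(reply)               # store it in the accessible reply queue
    return reply
```

A caller would poll `reply_queue` for the reply corresponding to its original request.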

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests (requesting application) from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module (general purpose) configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20090241118A1
CLAIM 1
. A method for facilitating a reply to a real-time request , the method including : managing the number of currently executing batch jobs ;
submitting the request as a batch job , wherein the batch job executes business logic corresponding at least in part to the request , wherein the request was received from a requesting application (queue requests) and stored into a request queue ;
receiving an output of the batch job ;
formatting a reply message corresponding to the request ;
and , storing the reply message in an accessible reply queue .

US20090241118A1
CLAIM 19
. A computer-readable storage medium containing a set of instructions for a general purpose (processing module) computer configured to : manage the currently executing batch jobs ;
submit the request as a batch job , wherein the batch job executes business logic corresponding at least in part to the request , wherein the request was received from a requesting application and stored into a request queue ;
receive an output of the batch job ;
format a reply message corresponding to the request ;
and , store the output in an accessible reply queue .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module (general purpose) is further configured to build a table of queue usage based on at least one observed datacenter queue request .
US20090241118A1
CLAIM 19
. A computer-readable storage medium containing a set of instructions for a general purpose (processing module) computer configured to : manage the currently executing batch jobs ;
submit the request as a batch job , wherein the batch job executes business logic corresponding at least in part to the request , wherein the request was received from a requesting application and stored into a request queue ;
receive an output of the batch job ;
format a reply message corresponding to the request ;
and , store the output in an accessible reply queue .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module (general purpose) is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20090241118A1
CLAIM 19
. A computer-readable storage medium containing a set of instructions for a general purpose (processing module) computer configured to : manage the currently executing batch jobs ;
submit the request as a batch job , wherein the batch job executes business logic corresponding at least in part to the request , wherein the request was received from a requesting application and stored into a request queue ;
receive an output of the batch job ;
format a reply message corresponding to the request ;
and , store the output in an accessible reply queue .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests (requesting application) from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20090241118A1
CLAIM 1
. A method for facilitating a reply to a real-time request , the method including : managing the number of currently executing batch jobs ;
submitting the request as a batch job , wherein the batch job executes business logic corresponding at least in part to the request , wherein the request was received from a requesting application (queue requests) and stored into a request queue ;
receiving an output of the batch job ;
formatting a reply message corresponding to the request ;
and , storing the reply message in an accessible reply queue .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20080235690A1

Filed: 2008-03-18     Issued: 2008-09-25

Maintaining Processing Order While Permitting Parallelism

(Original Assignee) VMware Inc     (Current Assignee) VMware Inc

Boon Seong Ang, Andrew Lambeth, Jyothir Ramanan
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue (virtual machine monitor) at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20080235690A1
CLAIM 22
. The method of claim 11 , wherein performing the task associated with the second stage comprises delivering the first packet to a virtual machine monitor (second datacenter, datacenter queue) .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker to the datacenter queue (virtual machine monitor) ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20080235690A1
CLAIM 22
. The method of claim 11 , wherein performing the task associated with the second stage comprises delivering the first packet to a virtual machine monitor (second datacenter, datacenter queue) .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue (virtual machine monitor) is configured to hide a requested message upon receiving the message request from the consumer worker .
US20080235690A1
CLAIM 22
. The method of claim 11 , wherein performing the task associated with the second stage comprises delivering the first packet to a virtual machine monitor (second datacenter, datacenter queue) .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue (virtual machine monitor) at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20080235690A1
CLAIM 22
. The method of claim 11 , wherein performing the task associated with the second stage comprises delivering the first packet to a virtual machine monitor (second datacenter, datacenter queue) .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue (virtual machine monitor) request .
US20080235690A1
CLAIM 22
. The method of claim 11 , wherein performing the task associated with the second stage comprises delivering the first packet to a virtual machine monitor (second datacenter, datacenter queue) .

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue (virtual machine monitor) request .
US20080235690A1
CLAIM 22
. The method of claim 11 , wherein performing the task associated with the second stage comprises delivering the first packet to a virtual machine monitor (second datacenter, datacenter queue) .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker to the datacenter queue (virtual machine monitor) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20080235690A1
CLAIM 22
. The method of claim 11 , wherein performing the task associated with the second stage comprises delivering the first packet to a virtual machine monitor (second datacenter, datacenter queue) .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue (virtual machine monitor) is configured to hide the requested message upon receiving the message request from the consumer worker .
US20080235690A1
CLAIM 22
. The method of claim 11 , wherein performing the task associated with the second stage comprises delivering the first packet to a virtual machine monitor (second datacenter, datacenter queue) .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue (virtual machine monitor) at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter (virtual machine monitor) location different from the first ;

detect a consumer worker that is executed on a second VM (virtual machines) and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20080235690A1
CLAIM 11
. A method for processing packets in a virtual switch connected to a plurality of virtual machines (second VM, second VMs) , the method comprising : taking a lock associated with a first stage ;
performing a task associated with the first stage on a first packet ;
determining if a lock associated with a second stage is available ;
if the lock associated with the second stage is available , taking the lock associated with the second stage , releasing the lock associated with the first stage , and performing a task associated with the second stage on the first packet ;
and if the lock associated with the second stage is not available , storing the packet in a queue associated with the second stage .
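The staged locking recited in US20080235690A1 claim 11 can be sketched as follows. This is a simplified single-handoff reading (two stages, a non-blocking try on the second-stage lock, and a park queue for busy handoffs); `Stage` and `advance` are illustrative names, and the drain loop is an assumption about how parked packets get serviced:

```python
import threading
from collections import deque

class Stage:
    def __init__(self, task):
        self.lock = threading.Lock()
        self.queue = deque()   # packets parked when the lock was busy
        self.task = task

def advance(packet, first, second):
    # Take the first-stage lock and perform the first-stage task.
    with first.lock:
        first.task(packet)
        # If the second-stage lock is available, take it (first-stage
        # lock is released when the `with` block exits); otherwise park
        # the packet in the second stage's queue to preserve order.
        if second.lock.acquire(blocking=False):
            owns_second = True
        else:
            second.queue.append(packet)
            owns_second = False
    if owns_second:
        try:
            second.task(packet)
            while second.queue:  # drain packets parked by other threads
                second.task(second.queue.popleft())
        finally:
            second.lock.release()

order = []
s1 = Stage(lambda p: order.append(("s1", p)))
s2 = Stage(lambda p: order.append(("s2", p)))
for pkt in ("a", "b"):
    advance(pkt, s1, s2)
```

The point of the pattern, per the title of the reference, is that per-stage locks let different packets occupy different stages concurrently while the park queue keeps per-stage processing order.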

US20080235690A1
CLAIM 22
. The method of claim 11 , wherein performing the task associated with the second stage comprises delivering the first packet to a virtual machine monitor (second datacenter, datacenter queue) .

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue (virtual machine monitor) .
US20080235690A1
CLAIM 22
. The method of claim 11 , wherein performing the task associated with the second stage comprises delivering the first packet to a virtual machine monitor (second datacenter, datacenter queue) .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue (virtual machine monitor) request .
US20080235690A1
CLAIM 22
. The method of claim 11 , wherein performing the task associated with the second stage comprises delivering the first packet to a virtual machine monitor (second datacenter, datacenter queue) .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue (virtual machine monitor) request .
US20080235690A1
CLAIM 22
. The method of claim 11 , wherein performing the task associated with the second stage comprises delivering the first packet to a virtual machine monitor (second datacenter, datacenter queue) .

US8954993B2
CLAIM 20
. The datacenter of claim 14 , wherein the first and second VMs (virtual machines) are configured to execute on the same physical machine .
US20080235690A1
CLAIM 11
. A method for processing packets in a virtual switch connected to a plurality of virtual machines (second VM, second VMs) , the method comprising : taking a lock associated with a first stage ;
performing a task associated with the first stage on a first packet ;
determining if a lock associated with a second stage is available ;
if the lock associated with the second stage is available , taking the lock associated with the second stage , releasing the lock associated with the first stage , and performing a task associated with the second stage on the first packet ;
and if the lock associated with the second stage is not available , storing the packet in a queue associated with the second stage .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker to the datacenter queue (virtual machine monitor) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20080235690A1
CLAIM 22
. The method of claim 11 , wherein performing the task associated with the second stage comprises delivering the first packet to a virtual machine monitor (second datacenter, datacenter queue) .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue (virtual machine monitor) is configured to hide the requested message upon receiving the message request from the consumer worker .
US20080235690A1
CLAIM 22
. The method of claim 11 , wherein performing the task associated with the second stage comprises delivering the first packet to a virtual machine monitor (second datacenter, datacenter queue) .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20090234908A1

Filed: 2008-03-14     Issued: 2009-09-17

Data transmission queuing using fault prediction

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC

Marc D. Reyhner, Ian C. Jirka
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server (remote computer) ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20090234908A1
CLAIM 5
. The method of claim 1 , wherein the data is received from a remote computer (second server) server and wherein the data is forwarded over the communication channel to a destination system .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server (remote computer) ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20090234908A1
CLAIM 5
. The method of claim 1 , wherein the data is received from a remote computer (second server) server and wherein the data is forwarded over the communication channel to a destination system .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter (queue management) location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20090234908A1
CLAIM 19
. A computer system comprising : a data transmission queue including a plurality of virtual queues including a first virtual queue associated with a first fault group and a second virtual queue associated with a second fault group ;
a communication channel communicatively coupled to the data transmission queue ;
and a virtual queue management (second datacenter, second datacenter location) module to evaluate data to be communicated over the communication channel and to control assignment of the evaluated data with respect to at least one of the plurality of virtual queues within the data transmission queue .
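The virtual-queue management of US20090234908A1 claim 19 partitions one transmission queue by fault group, so that a predicted fault stalls only the affected group. A minimal sketch under that reading; `FaultGroupedQueue` and the `group_healthy` predicate are illustrative:

```python
from collections import defaultdict, deque

class FaultGroupedQueue:
    """One data transmission queue split into virtual queues keyed by
    fault group, as in the claimed virtual queue management module."""

    def __init__(self):
        self.virtual = defaultdict(deque)

    def enqueue(self, data, fault_group):
        # Assign evaluated data to the virtual queue for its fault group.
        self.virtual[fault_group].append(data)

    def transmit(self, group_healthy):
        # Send only from virtual queues whose fault group is healthy;
        # data for faulted groups stays queued.
        sent = []
        for group, q in self.virtual.items():
            if group_healthy(group):
                while q:
                    sent.append(q.popleft())
        return sent

q = FaultGroupedQueue()
q.enqueue("x", "groupA")
q.enqueue("y", "groupB")
sent = q.transmit(lambda g: g == "groupA")  # groupB is predicted faulty
```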




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
EP1939743A2

Filed: 2007-11-26     Issued: 2008-07-02

Event correlation

(Original Assignee) SAP SE     (Current Assignee) SAP SE

Franz Weber
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request (incoming messages) to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .
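The dequeue condition of EP1939743A2 claim 1 turns on the per-message processing statistics. One way to read that condition as code (the `Stats` record and `may_dequeue` are illustrative; `threshold == 0` is assumed to mean "no threshold requirement"):

```python
from dataclasses import dataclass

@dataclass
class Stats:
    to_process: bool = False  # is a process instance still to process it?
    handling: int = 0         # process instances currently handling it
    processed: int = 0        # process instances that have processed it
    threshold: int = 0        # required processor count (0 = none)

def may_dequeue(s):
    """Dequeue only when no instance is handling the message and either
    the threshold number of instances have processed it or no instance
    is to process its content at all."""
    if s.handling > 0:
        return False
    if s.threshold > 0:
        return s.processed >= s.threshold
    return not s.to_process
```

For contrast with the target patent: here the queue itself tracks multi-consumer progress, whereas US8954993B2's command-channel signal drives modification of a locally cached copy.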

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request (incoming messages) from the consumer worker to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue is configured to hide a requested message upon receiving the message request (incoming messages) from the consumer worker .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request (incoming messages) to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request (incoming messages) from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request (incoming messages) from the consumer worker .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM (first event) and sends a message request (incoming messages) to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .

EP1939743A2
CLAIM 16
A computer program product in accordance with any one of the preceding claims 9 to 15 , to further cause the data processing apparatus to perform operations comprising : associating the first data with a process instance as being data representing a first event (second VM) for a group of one or more events to be processed if certain criteria is met ;
determining whether the criteria is met ;
and processing the first data in response to a determination that the criteria is met .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request (incoming messages) from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request (incoming messages) from the consumer worker .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20080077939A1

Filed: 2007-07-31     Issued: 2008-03-27

Solution for modifying a queue manager to support smart aliasing which permits extensible software to execute against queued data without application modifications

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

Richard Michael Harran, Stephen James Hobson, Peter Siddall
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server (given operation) ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20080077939A1
CLAIM 11
. A queue manager comprising : a smart alias function configured to associate a queue name with a plurality of different queues , wherein which one of the queues is associated with the queue name for a given operation (second server) is dependent upon programmatically determinable conditions , wherein the queue manager is configured to receives digitally encoded messages , to store the received digitally encoded messages , and to provide the digitally encoded messages to authorized requesting software applications , and wherein the queue manager and the smart alias function comprises a set of programmatic instructions stored in a machine readable medium , wherein said programmatic instructions are readable by a machine , which cause the machine to perform a set of operations for which the associated queue manager or smart alias function are configured .
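The smart-alias function of US20080077939A1 claim 11 binds one queue name to different underlying queues depending on programmatically determinable conditions. A hypothetical sketch of that resolution step (`resolve_alias`, the rules table, and the queue names are all invented for illustration):

```python
def resolve_alias(name, rules, default_queue):
    """Resolve a queue name to one of several queues: the first rule
    whose condition currently holds wins, else fall back to the
    default queue the name ordinarily denotes."""
    for condition, queue in rules.get(name, ()):
        if condition():
            return queue
    return default_queue

audit_enabled = True
rules = {"ORDERS": [(lambda: audit_enabled, "ORDERS.AUDIT")]}

target = resolve_alias("ORDERS", rules, "ORDERS.MAIN")    # condition holds
audit_enabled = False
fallback = resolve_alias("ORDERS", rules, "ORDERS.MAIN")  # condition fails
```

This lets extensible software be slotted behind an existing queue name without modifying the applications that put to or get from that name, which is the stated aim of the reference.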

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server (given operation) ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
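Per the abstract, the claim 8 queue usage detector identifies "matched queues": queues with both a producer and a consumer co-located on the same server. A minimal sketch, assuming hypothetical names throughout:

```python
# Hypothetical sketch of a queue-usage detector: observe sends and
# requests per queue, and flag queues with a co-located producer and
# consumer as candidates for local (cached) processing.
class QueueUsageDetector:
    def __init__(self):
        self.producers = {}  # queue name -> set of local worker ids
        self.consumers = {}

    def observe_send(self, worker, queue):
        self.producers.setdefault(queue, set()).add(worker)

    def observe_request(self, worker, queue):
        self.consumers.setdefault(queue, set()).add(worker)

    def matched_queues(self):
        # A queue used by both a local producer and a local consumer
        # is a "matched queue" whose traffic can be handled locally.
        return sorted(set(self.producers) & set(self.consumers))

det = QueueUsageDetector()
det.observe_send("worker-1", "render-jobs")
det.observe_request("worker-2", "render-jobs")
det.observe_send("worker-3", "audit-log")
print(det.matched_queues())  # only 'render-jobs' is matched
```

Observed requests could also populate the "table of queue usage" recited in dependent claims 9 and 10.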
US20080077939A1
CLAIM 11
. A queue manager comprising : a smart alias function configured to associate a queue name with a plurality of different queues , wherein which one of the queues is associated with the queue name for a given operation (second server) is dependent upon programmatically determinable conditions , wherein the queue manager is configured to receives digitally encoded messages , to store the received digitally encoded messages , and to provide the digitally encoded messages to authorized requesting software applications , and wherein the queue manager and the smart alias function comprises a set of programmatic instructions stored in a machine readable medium , wherein said programmatic instructions are readable by a machine , which cause the machine to perform a set of operations for which the associated queue manager or smart alias function are configured .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter (one computer) location different from the first ;

detect a consumer worker that is executed on a second VM (application execution) and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
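Claim 14 shifts the same mechanism to a datacenter controller: the message is intercepted before it is stored, cached at a second datacenter location, and served to a consumer on another VM. A sketch under the same caveat (all names hypothetical, no source code is disclosed):

```python
# Hypothetical sketch of the claim 14 controller: intercept a producer's
# message before the remote store, cache it at a second location, and
# serve a consumer's request from that cache.
class DatacenterController:
    def __init__(self):
        self.caches = {}  # (cache location, queue name) -> messages

    def intercept_send(self, queue, message, cache_location):
        # Intercept before storing; keep a copy in a queue cache at a
        # location different from the queue's primary location.
        self.caches.setdefault((cache_location, queue), []).append(message)

    def serve_request(self, queue, cache_location):
        cached = self.caches.get((cache_location, queue), [])
        return cached.pop(0) if cached else None

ctrl = DatacenterController()
ctrl.intercept_send("jobs", {"id": 7}, cache_location="server-A")
print(ctrl.serve_request("jobs", cache_location="server-A"))
```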
US20080077939A1
CLAIM 10
. The method of claim 1 , wherein said steps of claim 1 are steps performed automatically by at least one machine in accordance with at least one computer (second datacenter) program having a plurality of code sections that are executable by the at least one machine , said at least one computer program being stored in a machine readable medium .

US20080077939A1
CLAIM 16
. The queue manager of claim 15 , further comprising : a queue application configured to execute at least one programmatic action for which received message input is required , wherein for each new received message added to the intake queue , the queue application executes (second VM) , and wherein after the queue application executes , the new received message is placed in the output queue .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070239838A1

Filed: 2007-04-09     Issued: 2007-10-11

Methods and systems for digital content sharing

(Original Assignee) Nokia Oyj; Twango Inc     (Current Assignee) Nokia Technologies Oy

James Laurel, Michael Laurel, Serena Glover, Don Kim, Philip Carmichael, Randall Kerr
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue (more servers) at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request (second email, first email) to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request, datacenter queue request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request, datacenter queue request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request (second email, first email) from the consumer worker to the datacenter queue (more servers) ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
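Claims 6 and 7 add conditional forwarding: the consumer's request is passed to the remote queue only when a criterion is met, e.g. when the remote queue hides a requested message on receipt (so the remote visibility state must be kept consistent). A hedged sketch with hypothetical names:

```python
# Hypothetical sketch of claims 6/7: intercept the consumer's message
# request and forward it to the remote queue only if the criterion holds;
# the co-located consumer is served from the local cache either way.
def handle_request(request, queue_config, forward, serve_locally):
    if queue_config.get("hides_on_receive", False):
        # Criterion met: the remote queue must see the request so it
        # can hide the message from other consumers.
        forward(request)
    # Criterion not met: refrain from forwarding; serve locally only.
    return serve_locally(request)

forwarded = []
result = handle_request(
    {"queue": "tasks"},
    {"hides_on_receive": True},
    forward=forwarded.append,
    serve_locally=lambda r: {"body": "cached-msg"},
)
print(result, len(forwarded))  # local message served; request forwarded once
```

This mirrors visibility-timeout semantics in conventional datacenter queues, where a received message becomes temporarily invisible to other consumers.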
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request, datacenter queue request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request, datacenter queue request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue (more servers) is configured to hide a requested message upon receiving the message request (second email, first email) from the consumer worker .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request, datacenter queue request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request, datacenter queue request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue (more servers) at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request (second email, first email) to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request, datacenter queue request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request, datacenter queue request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue (more servers) request (second email, first email) .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request, datacenter queue request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request, datacenter queue request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue (more servers) request (second email, first email) .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request, datacenter queue request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request, datacenter queue request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request (second email, first email) from the consumer worker to the datacenter queue (more servers) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request, datacenter queue request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request, datacenter queue request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue (more servers) is configured to hide the requested message upon receiving the message request (second email, first email) from the consumer worker .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request, datacenter queue request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request, datacenter queue request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue (more servers) at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request (second email, first email) to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request, datacenter queue request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request, datacenter queue request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue (more servers) .
US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue (more servers) request (second email, first email) .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request, datacenter queue request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request, datacenter queue request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue (more servers) request (second email, first email) .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request, datacenter queue request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request, datacenter queue request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request (second email, first email) from the consumer worker to the datacenter queue (more servers) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request, datacenter queue request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request, datacenter queue request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue (more servers) is configured to hide the requested message upon receiving the message request (second email, first email) from the consumer worker .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request, datacenter queue request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request, datacenter queue request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20080212602A1

Filed: 2007-03-01     Issued: 2008-09-04

Method, system and program product for optimizing communication and processing functions between disparate applications

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

Alphana B. Hobbs, Daniel P. Huskey, Shirish S. Javalkar, Tuan A. Pham, William J. Reilly, Allen J. Scribner, Deirdre A. Wessel
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server (second request) sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
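For orientation, the mechanism recited in claim 1 of US8954993B2 can be sketched as a local queue cache that stores a producer's message at the first server, serves it to a co-located consumer, and modifies the cached copy on a command-channel signal. This is an illustrative sketch only; all class and method names are hypothetical and not drawn from the patent or either reference:

```python
class LocalQueueCache:
    """Sketch of US8954993B2 claim 1: a local (partial) copy of a remote
    datacenter queue, serving co-located producer and consumer workers."""

    def __init__(self):
        self.cache = []  # "one of a copy and a partial copy of the datacenter queue"

    def on_producer_send(self, message):
        # "storing the first message in a queue cache at the first server"
        self.cache.append(message)

    def on_consumer_request(self):
        # "providing the stored first message to the consumer worker
        #  in response to the message request"
        return self.cache[0] if self.cache else None

    def on_command_signal(self, signal):
        # "modifying the stored first message in response to receiving
        #  the signal" from the command channel (here: a delete signal)
        if signal == "delete" and self.cache:
            self.cache.pop(0)


q = LocalQueueCache()
q.on_producer_send("task-1")
print(q.on_consumer_request())  # task-1
q.on_command_signal("delete")
print(q.on_consumer_request())  # None
```

The point of the sketch is that both the store and the provide steps happen at the first server, without a round trip to the second server where the datacenter queue resides.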
US20080212602A1
CLAIM 1
. A method of optimizing communication and processing functions between disparate applications , said method comprising the steps of : sending , from a first application to a second application , a request message of one or more request messages , said request message of said one or more request messages being formatted in a first request-format to provide a plurality of unique data elements relevant to processing said request message and having a reduced data size for optimizing communication ;
reformatting , by said second application , said request message received having said first request-format into a reformatted request message having a second request (first server) -format , said reformatted request message being forwarded to a third application ;
creating , by said third application , a response message having a first response-format , said response message being sent to said second application ;
and queuing , by a messaging application , each response message received from said second application into a response message collection corresponding to a message type , before sending said response group to said first application , wherein processing of said response message collection received by said first application is optimized .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server (second request) .
US20080212602A1
CLAIM 1
. A method of optimizing communication and processing functions between disparate applications , said method comprising the steps of : sending , from a first application to a second application , a request message of one or more request messages , said request message of said one or more request messages being formatted in a first request-format to provide a plurality of unique data elements relevant to processing said request message and having a reduced data size for optimizing communication ;
reformatting , by said second application , said request message received having said first request-format into a reformatted request message having a second request (first server) -format , said reformatted request message being forwarded to a third application ;
creating , by said third application , a response message having a first response-format , said response message being sent to said second application ;
and queuing , by a messaging application , each response message received from said second application into a response message collection corresponding to a message type , before sending said response group to said first application , wherein processing of said response message collection received by said first application is optimized .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion (second program, first program) is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
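The intercept-and-forward logic of claims 6-7 (and the parallel claims 12-13 and 22-23) can be sketched as a gate on the consumer's message request: forward to the remote queue only when the first criterion is met, e.g. when the queue hides the requested message upon receiving the request. A minimal sketch with hypothetical names:

```python
def handle_message_request(request, cache, queue_hides_on_receive, forward):
    """Sketch of US8954993B2 claims 6-7: intercept the message request
    from the consumer worker; forward it to the datacenter queue only if
    the first criterion is met (here, whether the queue is configured to
    hide the requested message upon receiving the request)."""
    if queue_hides_on_receive:
        # criterion met: forward so the remote queue hides the message
        forward(request)
    # criterion not met: refrain from forwarding
    # either way, serve the locally stored message
    return cache.pop(0) if cache else None


forwarded = []
msg = handle_message_request("get", ["m1"], True, forwarded.append)
print(msg, forwarded)  # m1 ['get']
```

The design choice illustrated: the local copy always answers the consumer, while forwarding is needed only to keep the remote queue's visibility state (hiding) consistent.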
US20080212602A1
CLAIM 15
. A computer program product for optimizing communication and processing functions between disparate applications , said computer program product comprising : a computer readable medium ;
first program (first criterion) instructions to create , in a contemporary application , a request message having a condensed format for routing from said contemporary application to a legacy application , said request message having said condensed format that provides a plurality of unique data elements relevant to processing said request message and having a reduced data size for optimizing communication ;
second program (first criterion) instructions to convert using a routing application said request message having said condensed format into a reformatted request message having an expanded format before routing to said legacy application ;
third program instructions to convert using said routing application a response message having a legacy format received from said legacy application into a reformatted response message having a contemporary format for routing to a messaging application ;
fourth program instructions to queue , using said messaging application , said reformatted response message having said contemporary format in a response group corresponding to a message type , said response group containing other received reformatted response messages having said contemporary format that match said message type before transmitting said response group to said contemporary application ;
and wherein said first , second , third and fourth program instructions are stored on said computer readable medium .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion (second program, first program) includes whether the datacenter queue is configured to hide a requested message upon receiving the message request from the consumer worker .
US20080212602A1
CLAIM 15
. A computer program product for optimizing communication and processing functions between disparate applications , said computer program product comprising : a computer readable medium ;
first program (first criterion) instructions to create , in a contemporary application , a request message having a condensed format for routing from said contemporary application to a legacy application , said request message having said condensed format that provides a plurality of unique data elements relevant to processing said request message and having a reduced data size for optimizing communication ;
second program (first criterion) instructions to convert using a routing application said request message having said condensed format into a reformatted request message having an expanded format before routing to said legacy application ;
third program instructions to convert using said routing application a response message having a legacy format received from said legacy application into a reformatted response message having a contemporary format for routing to a messaging application ;
fourth program instructions to queue , using said messaging application , said reformatted response message having said contemporary format in a response group corresponding to a message type , said response group containing other received reformatted response messages having said contemporary format that match said message type before transmitting said response group to said contemporary application ;
and wherein said first , second , third and fourth program instructions are stored on said computer readable medium .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server (second request) , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20080212602A1
CLAIM 1
. A method of optimizing communication and processing functions between disparate applications , said method comprising the steps of : sending , from a first application to a second application , a request message of one or more request messages , said request message of said one or more request messages being formatted in a first request-format to provide a plurality of unique data elements relevant to processing said request message and having a reduced data size for optimizing communication ;
reformatting , by said second application , said request message received having said first request-format into a reformatted request message having a second request (first server) -format , said reformatted request message being forwarded to a third application ;
creating , by said third application , a response message having a first response-format , said response message being sent to said second application ;
and queuing , by a messaging application , each response message received from said second application into a response message collection corresponding to a message type , before sending said response group to said first application , wherein processing of said response message collection received by said first application is optimized .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (second program, first program) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20080212602A1
CLAIM 15
. A computer program product for optimizing communication and processing functions between disparate applications , said computer program product comprising : a computer readable medium ;
first program (first criterion) instructions to create , in a contemporary application , a request message having a condensed format for routing from said contemporary application to a legacy application , said request message having said condensed format that provides a plurality of unique data elements relevant to processing said request message and having a reduced data size for optimizing communication ;
second program (first criterion) instructions to convert using a routing application said request message having said condensed format into a reformatted request message having an expanded format before routing to said legacy application ;
third program instructions to convert using said routing application a response message having a legacy format received from said legacy application into a reformatted response message having a contemporary format for routing to a messaging application ;
fourth program instructions to queue , using said messaging application , said reformatted response message having said contemporary format in a response group corresponding to a message type , said response group containing other received reformatted response messages having said contemporary format that match said message type before transmitting said response group to said contemporary application ;
and wherein said first , second , third and fourth program instructions are stored on said computer readable medium .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion (second program, first program) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker .
US20080212602A1
CLAIM 15
. A computer program product for optimizing communication and processing functions between disparate applications , said computer program product comprising : a computer readable medium ;
first program (first criterion) instructions to create , in a contemporary application , a request message having a condensed format for routing from said contemporary application to a legacy application , said request message having said condensed format that provides a plurality of unique data elements relevant to processing said request message and having a reduced data size for optimizing communication ;
second program (first criterion) instructions to convert using a routing application said request message having said condensed format into a reformatted request message having an expanded format before routing to said legacy application ;
third program instructions to convert using said routing application a response message having a legacy format received from said legacy application into a reformatted response message having a contemporary format for routing to a messaging application ;
fourth program instructions to queue , using said messaging application , said reformatted response message having said contemporary format in a response group corresponding to a message type , said response group containing other received reformatted response messages having said contemporary format that match said message type before transmitting said response group to said contemporary application ;
and wherein said first , second , third and fourth program instructions are stored on said computer readable medium .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (second program, first program) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20080212602A1
CLAIM 15
. A computer program product for optimizing communication and processing functions between disparate applications , said computer program product comprising : a computer readable medium ;
first program (first criterion) instructions to create , in a contemporary application , a request message having a condensed format for routing from said contemporary application to a legacy application , said request message having said condensed format that provides a plurality of unique data elements relevant to processing said request message and having a reduced data size for optimizing communication ;
second program (first criterion) instructions to convert using a routing application said request message having said condensed format into a reformatted request message having an expanded format before routing to said legacy application ;
third program instructions to convert using said routing application a response message having a legacy format received from said legacy application into a reformatted response message having a contemporary format for routing to a messaging application ;
fourth program instructions to queue , using said messaging application , said reformatted response message having said contemporary format in a response group corresponding to a message type , said response group containing other received reformatted response messages having said contemporary format that match said message type before transmitting said response group to said contemporary application ;
and wherein said first , second , third and fourth program instructions are stored on said computer readable medium .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion (second program, first program) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker .
US20080212602A1
CLAIM 15
. A computer program product for optimizing communication and processing functions between disparate applications , said computer program product comprising : a computer readable medium ;
first program (first criterion) instructions to create , in a contemporary application , a request message having a condensed format for routing from said contemporary application to a legacy application , said request message having said condensed format that provides a plurality of unique data elements relevant to processing said request message and having a reduced data size for optimizing communication ;
second program (first criterion) instructions to convert using a routing application said request message having said condensed format into a reformatted request message having an expanded format before routing to said legacy application ;
third program instructions to convert using said routing application a response message having a legacy format received from said legacy application into a reformatted response message having a contemporary format for routing to a messaging application ;
fourth program instructions to queue , using said messaging application , said reformatted response message having said contemporary format in a response group corresponding to a message type , said response group containing other received reformatted response messages having said contemporary format that match said message type before transmitting said response group to said contemporary application ;
and wherein said first , second , third and fourth program instructions are stored on said computer readable medium .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20080148281A1

Filed: 2006-12-14     Issued: 2008-06-19

RDMA (remote direct memory access) data transfer in a virtual environment

(Original Assignee) Intel Corp     (Current Assignee) Intel Corp

William R. Magro, Robert J. Woodruff, Jianxin Xiong
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue (virtual machine monitor) at least partially stored at a second server (second virtual machine) ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20080148281A1
CLAIM 1
. A method comprising : determining that a message has been placed in a send buffer ;
and transferring the message to an application on a second virtual machine (second server, second VM) by bypassing use of an operating system to process the message by directly placing the message in an application memory space from which the application can retrieve the message .
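The OS-bypass transfer recited in US20080148281A1 claim 1 can be loosely illustrated with shared memory: the message is placed directly into a memory region the receiving application reads from, with no per-message operating-system message processing. This is only an analogy; real RDMA uses NIC hardware, and the names below are illustrative:

```python
from multiprocessing import shared_memory

# Sketch of US20080148281A1 claim 1 (analogy only): place a message
# directly into the receiving application's memory space, bypassing
# OS processing of the message itself.

msg = b"hello"
send_buffer = bytearray(msg)  # the "send buffer" holding the message

# the "application memory space" the receiver reads from directly
shm = shared_memory.SharedMemory(create=True, size=len(send_buffer))
shm.buf[: len(send_buffer)] = send_buffer  # direct placement of the message

received = bytes(shm.buf[: len(msg)])  # receiver retrieves the message
shm.close()
shm.unlink()
print(received)  # b'hello'
```

The relevance to the chart is the mapping shown in the parentheticals: the VMM-mediated direct placement is what the chart reads onto the '993 patent's locally served queue messages.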

US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (second datacenter, datacenter queue) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker to the datacenter queue (virtual machine monitor) ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (second datacenter, datacenter queue) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue (virtual machine monitor) is configured to hide a requested message upon receiving the message request from the consumer worker .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (second datacenter, datacenter queue) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue (virtual machine monitor) at least partially stored at a second server (second virtual machine) ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20080148281A1
CLAIM 1
. A method comprising : determining that a message has been placed in a send buffer ;
and transferring the message to an application on a second virtual machine (second server, second VM) by bypassing use of an operating system to process the message by directly placing the message in an application memory space from which the application can retrieve the message .

US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (second datacenter, datacenter queue) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue (virtual machine monitor) request .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (second datacenter, datacenter queue) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue (virtual machine monitor) request .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (second datacenter, datacenter queue) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker to the datacenter queue (virtual machine monitor) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (second datacenter, datacenter queue) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue (virtual machine monitor) is configured to hide the requested message upon receiving the message request from the consumer worker .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (second datacenter, datacenter queue) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue (virtual machine monitor) at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter (virtual machine monitor) location different from the first ;

detect a consumer worker that is executed on a second VM (second virtual machine) and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20080148281A1
CLAIM 1
. A method comprising : determining that a message has been placed in a send buffer ;
and transferring the message to an application on a second virtual machine (second server, second VM) by bypassing use of an operating system to process the message by directly placing the message in an application memory space from which the application can retrieve the message .

US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (second datacenter, datacenter queue) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .
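For context on the mapped limitations, the controller behavior recited in US8954993B2 claim 14 (intercept a producer's message, store it in a queue cache, provide it to a co-located consumer, and modify it on a command-channel signal) can be sketched as follows. This is a minimal, hypothetical illustration of the claim language only; all names are illustrative and nothing here is taken from the patent's actual implementation.

```python
class LocalQueueCache:
    """Hypothetical local stand-in for a remote datacenter queue."""

    def __init__(self):
        self._messages = {}  # message_id -> payload

    def intercept(self, message_id, payload):
        # "intercept the first message sent by the producer worker
        # before storing the first message" -- store it locally.
        self._messages[message_id] = payload

    def provide(self, message_id):
        # "provide the stored first message to the consumer worker
        # in response to the message request".
        return self._messages.get(message_id)

    def on_command_signal(self, message_id, action):
        # "modify the stored first message in response to receiving
        # the signal"; claim 21 recites deletion as one modification.
        if action == "delete":
            self._messages.pop(message_id, None)


cache = LocalQueueCache()
cache.intercept("m1", b"payload")
assert cache.provide("m1") == b"payload"
cache.on_command_signal("m1", "delete")
assert cache.provide("m1") is None
```

The sketch keeps the message within one server, matching the claim's "stored and provided from within a server to the producer worker and the consumer worker" limitation.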

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue (virtual machine monitor) .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (second datacenter, datacenter queue) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue (virtual machine monitor) request .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (second datacenter, datacenter queue) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue (virtual machine monitor) request .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (second datacenter, datacenter queue) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .
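Claims 17 and 18 recite observing datacenter queue requests and building "a table of queue usage" from them. A minimal sketch of that bookkeeping, under the assumption that usage is tallied per (worker, queue) pair (an illustrative choice, not specified by the claim), might look like:

```python
from collections import Counter


def build_usage_table(observed_requests):
    """Tally observed queue requests into a usage table.

    observed_requests: iterable of (worker_id, queue_id) tuples,
    one per observed datacenter queue request (claim 18).
    """
    return Counter(observed_requests)


table = build_usage_table([("w1", "qA"), ("w2", "qA"), ("w1", "qA")])
assert table[("w1", "qA")] == 2
```

Per the abstract, such a table could then be used to classify queues shared by co-located workers as "matched queues" eligible for local caching.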

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker to the datacenter queue (virtual machine monitor) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (second datacenter, datacenter queue) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue (virtual machine monitor) is configured to hide the requested message upon receiving the message request from the consumer worker .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (second datacenter, datacenter queue) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .
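Claims 22 and 23 recite a gate on intercepted consumer requests: forward to the datacenter queue if a first criterion is met, refrain otherwise, where the criterion includes whether the queue hides a requested message upon a read. A minimal sketch of that gate follows; the reading that forwarding preserves remote hide-on-read (visibility) semantics is one plausible interpretation, and all names are illustrative.

```python
def handle_request(request, queue_hides_on_read, forward, serve_locally):
    """Gate an intercepted consumer message request (claims 22-23).

    If the datacenter queue is configured to hide the requested
    message upon receiving the request (the "first criterion"),
    forward the request so the remote queue's state stays consistent;
    otherwise refrain from forwarding and serve the local copy.
    """
    if queue_hides_on_read:
        return forward(request)
    return serve_locally(request)


forwarded = []
remote = lambda r: (forwarded.append(r), "remote")[1]
local = lambda r: "local"

assert handle_request("req1", True, remote, local) == "remote"
assert handle_request("req2", False, remote, local) == "local"
assert forwarded == ["req1"]
```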




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070165625A1

Filed: 2006-12-01     Issued: 2007-07-19

System and method for exchanging information among exchange applications

(Original Assignee) FireStar Software Inc     (Current Assignee) FireStar Software Inc

Mark Eisner, Gabriel Oancea
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache (read access) at the first server , wherein the queue cache includes one (unique message identifier) of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20070165625A1
CLAIM 1
. A method for processing messages in a gateway , comprising : receiving a gateway message , the gateway message including a gateway message header and a payload , the gateway message header including a unique message identifier (queue cache includes one) block , a target block identifying where the gateway message is going , and a history block providing a log of what has happened to the gateway message ;
processing each block in the gateway message header in accordance with a message type , the processing including determining a target application for receiving the payload ;
and providing the payload to the target application , wherein each block includes one or more values .

US20070165625A1
CLAIM 5
. The method according to claim 2 , the method further comprising receiving a request from the target application to provide read access (queue cache) to the attachment stored in a data store in a gateway .
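The prior-art structure charted above (US20070165625A1 claim 1) recites a gateway message whose header carries a unique-message-identifier block, a target block, and a history block logging what has happened to the message. A minimal sketch of that structure, with field names that are illustrative assumptions rather than terms from the reference:

```python
from dataclasses import dataclass, field


@dataclass
class GatewayHeader:
    message_id: str                               # unique message identifier block
    target: str                                   # where the gateway message is going
    history: list = field(default_factory=list)   # log of what has happened


@dataclass
class GatewayMessage:
    header: GatewayHeader
    payload: bytes


def process(msg, applications):
    # "determining a target application for receiving the payload"
    # and "providing the payload to the target application".
    msg.header.history.append("routed")
    applications[msg.header.target](msg.payload)


received = {}
apps = {"appA": lambda p: received.setdefault("appA", p)}
msg = GatewayMessage(GatewayHeader("id-1", "appA"), b"data")
process(msg, apps)
assert received["appA"] == b"data"
assert msg.header.history == ["routed"]
```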

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache (read access) at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070165625A1
CLAIM 5
. The method according to claim 2 , the method further comprising receiving a request from the target application to provide read access (queue cache) to the attachment stored in a data store in a gateway .

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache (read access) includes one (unique message identifier) of a copy and a partial copy of the datacenter queue .
US20070165625A1
CLAIM 1
. A method for processing messages in a gateway , comprising : receiving a gateway message , the gateway message including a gateway message header and a payload , the gateway message header including a unique message identifier (queue cache includes one) block , a target block identifying where the gateway message is going , and a history block providing a log of what has happened to the gateway message ;
processing each block in the gateway message header in accordance with a message type , the processing including determining a target application for receiving the payload ;
and providing the payload to the target application , wherein each block includes one or more values .

US20070165625A1
CLAIM 5
. The method according to claim 2 , the method further comprising receiving a request from the target application to provide read access (queue cache) to the attachment stored in a data store in a gateway .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070168301A1

Filed: 2006-12-01     Issued: 2007-07-19

System and method for exchanging information among exchange applications

(Original Assignee) FireStar Software Inc     (Current Assignee) FireStar Software Inc

Mark Eisner, Gabriel Oancea
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (different gateways) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker (including information) at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US20070168301A1
CLAIM 4
. The method according to claim 1 , wherein the one or more simple transactions includes : a transmission transaction in which a gateway message is sent from a first gateway to a second gateway , wherein the second gateway processes the message and sends an acknowledgment to the first gateway after processing the message ;
and a request/reply transaction in which a gateway message is sent from the first gateway to the second gateway , wherein the second gateway processes the gateway message and sends a reply message back to the first gateway , wherein the first gateway and the second gateway can be the same gateway or different gateways (first message) .
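The US20070168301A1 reference charted above recites a routing slip in the message header that templates a complex transaction as simple transactions "performed in a defined order," executed against configuration data stored at the gateway. That execution model can be sketched as below; the handler names and dictionary-based configuration are illustrative assumptions, not the reference's actual design.

```python
def execute_routing_slip(routing_slip, configured_transactions, message):
    """Execute simple transactions in the routing slip's defined order.

    routing_slip: ordered list of simple-transaction names (the
        "template of a complex transaction" in the header).
    configured_transactions: the gateway's stored configuration data,
        mapping each name to a handler.
    """
    results = []
    for step in routing_slip:                    # defined order
        handler = configured_transactions[step]  # from configuration data
        results.append(handler(message))
    return results


config = {"transmit": lambda m: f"tx:{m}", "reply": lambda m: f"re:{m}"}
assert execute_routing_slip(["transmit", "reply"], config, "m1") == ["tx:m1", "re:m1"]
```

Claim 4 of the reference adds that the gateways on either end of a transmission or request/reply transaction "can be the same gateway or different gateways," which the chart maps against the patent's "first message" limitation.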

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (different gateways) sent by the producer worker before storing the first message .
US20070168301A1
CLAIM 4
. The method according to claim 1 , wherein the one or more simple transactions includes : a transmission transaction in which a gateway message is sent from a first gateway to a second gateway , wherein the second gateway processes the message and sends an acknowledgment to the first gateway after processing the message ;
and a request/reply transaction in which a gateway message is sent from the first gateway to the second gateway , wherein the second gateway processes the gateway message and sends a reply message back to the first gateway , wherein the first gateway and the second gateway can be the same gateway or different gateways (first message) .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker and the consumer worker (including information) are co-located on a multi-core device at the first server .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 4
. The method of claim 1 , wherein the producer worker and the consumer worker (including information) are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (different gateways) includes deleting the first message .
US20070168301A1
CLAIM 4
. The method according to claim 1 , wherein the one or more simple transactions includes : a transmission transaction in which a gateway message is sent from a first gateway to a second gateway , wherein the second gateway processes the message and sends an acknowledgment to the first gateway after processing the message ;
and a request/reply transaction in which a gateway message is sent from the first gateway to the second gateway , wherein the second gateway processes the gateway message and sends a reply message back to the first gateway , wherein the first gateway and the second gateway can be the same gateway or different gateways (first message) .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker (including information) to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue is configured to hide a requested message upon receiving the message request from the consumer worker (including information) .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (different gateways) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker (including information) at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US20070168301A1
CLAIM 4
. The method according to claim 1 , wherein the one or more simple transactions includes : a transmission transaction in which a gateway message is sent from a first gateway to a second gateway , wherein the second gateway processes the message and sends an acknowledgment to the first gateway after processing the message ;
and a request/reply transaction in which a gateway message is sent from the first gateway to the second gateway , wherein the second gateway processes the gateway message and sends a reply message back to the first gateway , wherein the first gateway and the second gateway can be the same gateway or different gateways (first message) .

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker and the consumer worker (including information) are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker (including information) to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker (including information) .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message (different gateways) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker (including information) that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US20070168301A1
CLAIM 4
. The method according to claim 1 , wherein the one or more simple transactions includes : a transmission transaction in which a gateway message is sent from a first gateway to a second gateway , wherein the second gateway processes the message and sends an acknowledgment to the first gateway after processing the message ;
and a request/reply transaction in which a gateway message is sent from the first gateway to the second gateway , wherein the second gateway processes the gateway message and sends a reply message back to the first gateway , wherein the first gateway and the second gateway can be the same gateway or different gateways (first message) .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (different gateways) sent by the producer worker before storing the first message .
US20070168301A1
CLAIM 4
. The method according to claim 1 , wherein the one or more simple transactions includes : a transmission transaction in which a gateway message is sent from a first gateway to a second gateway , wherein the second gateway processes the message and sends an acknowledgment to the first gateway after processing the message ;
and a request/reply transaction in which a gateway message is sent from the first gateway to the second gateway , wherein the second gateway processes the gateway message and sends a reply message back to the first gateway , wherein the first gateway and the second gateway can be the same gateway or different gateways (first message) .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker and the consumer worker (including information) are co-located on a multi-core device at the first datacenter location .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (different gateways) by deleting the first message .
US20070168301A1
CLAIM 4
. The method according to claim 1 , wherein the one or more simple transactions includes : a transmission transaction in which a gateway message is sent from a first gateway to a second gateway , wherein the second gateway processes the message and sends an acknowledgment to the first gateway after processing the message ;
and a request/reply transaction in which a gateway message is sent from the first gateway to the second gateway , wherein the second gateway processes the gateway message and sends a reply message back to the first gateway , wherein the first gateway and the second gateway can be the same gateway or different gateways (first message) .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker (including information) to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker (including information) .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .
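To make the logical structure of claims 22-23 above easier to compare against the cited art, the intercept/forward/refrain steps can be sketched as follows. This is a minimal illustrative sketch only: the function name, the modeling of the first criterion as a boolean flag, and the `forward` callback are all hypothetical, not drawn from the patent.

```python
# Hypothetical sketch of the intercept-and-filter steps of claims 22-23.
# The "first criterion" (whether the datacenter queue is configured to
# hide the requested message on receipt of the request, per claim 23)
# is modeled here as a simple boolean flag -- an illustrative assumption.

def handle_message_request(request, queue_hides_on_request, forward):
    """Intercept a consumer worker's message request and forward it to
    the datacenter queue only when the first criterion is met."""
    if queue_hides_on_request:        # first criterion (claim 23)
        return forward(request)       # forward to the datacenter queue
    return None                       # refrain from forwarding
```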




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070204275A1

Filed: 2006-08-28     Issued: 2007-08-30

Method and system for reliable message delivery

(Original Assignee) Rhysome Inc     (Current Assignee) Rhysome Inc

Melanie Alshab, Peter Bales, Robert Covington, Jonathan Theophilus, Lisa Trotter
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (transmitting step) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20070204275A1
CLAIM 11
. The method of claim 10 wherein said transmitting step (first message) includes targeting a device based on path information related to the data message .
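The method of claim 1 of US8954993B2, recited above, can be summarized as a local cache that intercepts a co-located producer's sends, serves a co-located consumer, and modifies stored messages on a command-channel signal. The sketch below is a minimal illustration under stated assumptions; the class name, method names, and the `"delete"` signal value are hypothetical and do not appear in the patent.

```python
# Illustrative sketch only: names are hypothetical, not from the patent.
from collections import deque

class LocalQueueCache:
    """Mirrors the steps of US8954993B2 claim 1: store a producer's
    message locally, serve it to a co-located consumer on request,
    and modify it in response to a command-channel signal."""

    def __init__(self):
        # Local copy (or partial copy) of the remote datacenter queue.
        self.cache = deque()

    def on_producer_send(self, message):
        # Detect the producer worker's send and store the message locally.
        self.cache.append(message)

    def on_consumer_request(self):
        # Provide the stored message in response to the message request.
        return self.cache[0] if self.cache else None

    def on_command_signal(self, signal):
        # Modify (here: delete) the stored message on a channel signal.
        if signal == "delete" and self.cache:
            self.cache.popleft()
```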

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (transmitting step) sent by the producer worker before storing the first message .
US20070204275A1
CLAIM 11
. The method of claim 10 wherein said transmitting step (first message) includes targeting a device based on path information related to the data message .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (transmitting step) includes deleting the first message .
US20070204275A1
CLAIM 11
. The method of claim 10 wherein said transmitting step (first message) includes targeting a device based on path information related to the data message .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (transmitting step) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070204275A1
CLAIM 11
. The method of claim 10 wherein said transmitting step (first message) includes targeting a device based on path information related to the data message .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message (transmitting step) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter (one computer) location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070204275A1
CLAIM 11
. The method of claim 10 wherein said transmitting step (first message) includes targeting a device based on path information related to the data message .

US20070204275A1
CLAIM 19
. A method for fault tolerant communications of a data message from a source computer to a destination computer where an application generates a data message on the source computer ;
wherein data messages are stored in volatile memory without the need for persistent storage ;
the source and destination computers are a part of a group of computers connected together with a communications system ;
comprising the steps of : sending a data copy of the message by the source computer to at least one computer (second datacenter) ;
each computer that receives the data message forwards a copy of the data message to another computer when a computer receives a copy of the message the receiving computer generates an acknowledgement message which is sent to the computer having sent the message that the acknowledgement message has been received ;
and each computer that receives the acknowledgement message removes the data from its volatile memory .
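The forward-and-acknowledge flow of US20070204275A1 claim 19 above, in which each computer keeps a copy in volatile memory until an acknowledgement arrives and then drops it, can be sketched as below. All class and attribute names are illustrative assumptions, and the chain topology is simplified to a single next hop per node.

```python
# Hypothetical sketch of claim 19's acknowledge-and-remove flow.
# Each node holds the message in volatile memory (a dict here) until
# the receiving node acknowledges, then removes its copy.

class Node:
    def __init__(self, name):
        self.name = name
        self.volatile = {}            # message id -> copy in volatile memory
        self.next_hop = None          # simplified single-hop forwarding

    def send(self, msg_id, payload):
        # Keep a copy until the receiver acknowledges it.
        self.volatile[msg_id] = payload
        if self.next_hop:
            self.next_hop.receive(self, msg_id, payload)

    def receive(self, sender, msg_id, payload):
        self.volatile[msg_id] = payload
        sender.acknowledge(msg_id)    # acknowledgement back to the sender
        if self.next_hop:
            self.next_hop.receive(self, msg_id, payload)

    def acknowledge(self, msg_id):
        # On acknowledgement, remove the data from volatile memory.
        self.volatile.pop(msg_id, None)
```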

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (transmitting step) sent by the producer worker before storing the first message .
US20070204275A1
CLAIM 11
. The method of claim 10 wherein said transmitting step (first message) includes targeting a device based on path information related to the data message .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (transmitting step) by deleting the first message .
US20070204275A1
CLAIM 11
. The method of claim 10 wherein said transmitting step (first message) includes targeting a device based on path information related to the data message .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070288931A1

Filed: 2006-05-25     Issued: 2007-12-13

Multi processor and multi thread safe message queue with hardware assistance

(Original Assignee) PortalPlayer Inc     (Current Assignee) Nvidia Corp

Gokhan Avkarogullari
US8954993B2
CLAIM 1
. A method to locally process queue requests (exchanging messages) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20070288931A1
CLAIM 13
. A method for exchanging messages (queue requests) between a first software component running on a first computerized processor and a second software component running on a second computerized processor , wherein the first computerized processor and the second computerized processor have access to a shared memory , the method comprising : (a) attempting with the first software component to load a message queue write register with a message token that is a pointer to the message or that is the message itself ;
(b) determining whether there is space for the message token in a message queue in the shared memory , wherein said determining is triggered by said (a) having occurred and is performed atomically with respect to the software components ;
(c) if said (b) indicates that said space is available , updating said message queue , wherein said updating is also atomically with respect to the software components and includes : (1) incrementing a message counter ;
(2) writing said message token into said message queue at a location designated by a write pointer ;
and (3) changing said write pointer to point to a next location in said message queue ;
(d) attempting with the second software component to load said message token from a message queue read register ;
(e) determining whether said message token is new , thereby indicating whether there is at least one new message in the message queue , and wherein said determining is triggered by said (d) having occurred and is performed atomically with respect to the software components ;
(f) if said (e) indicates that the message is new , updating said message queue , wherein said updating is also atomically with respect to the software components and includes : (1) decrementing said message counter ;
(2) reading said message token from said message queue at a location designated by a read pointer ;
and (3) changing said read pointer to point to a next location in said message queue .
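Claim 13 of US20070288931A1, quoted above, effectively recites a bounded ring buffer with a message counter and read/write pointers updated atomically. A minimal sketch follows; a `threading.Lock` stands in for the claim's hardware-assisted atomicity, and the class name and capacity are illustrative assumptions.

```python
# Minimal sketch of the shared-memory message queue of claim 13.
# The lock models the claim's atomic updates; it is not the patent's
# hardware mechanism.
import threading

class SharedMessageQueue:
    def __init__(self, capacity=8):
        self.slots = [None] * capacity
        self.count = 0                 # message counter
        self.write_ptr = 0
        self.read_ptr = 0
        self._lock = threading.Lock()  # stands in for hardware atomicity

    def write(self, token):
        """Steps (a)-(c): check for space, then update atomically."""
        with self._lock:
            if self.count == len(self.slots):
                return False           # no space for the message token
            self.slots[self.write_ptr] = token          # (c)(2) write token
            self.count += 1                             # (c)(1) increment counter
            self.write_ptr = (self.write_ptr + 1) % len(self.slots)  # (c)(3)
            return True

    def read(self):
        """Steps (d)-(f): check for a new message, then consume atomically."""
        with self._lock:
            if self.count == 0:
                return None            # no new message in the queue
            token = self.slots[self.read_ptr]           # (f)(2) read token
            self.count -= 1                             # (f)(1) decrement counter
            self.read_ptr = (self.read_ptr + 1) % len(self.slots)    # (f)(3)
            return token
```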

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests (exchanging messages) from co-located workers in a datacenter , the VMM comprising : a queue usage detector module (turning control) configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070288931A1
CLAIM 5
. The method of claim 4 , wherein said interrupting includes : signaling a read semaphore that is top most in a waiting list ;
removing said read semaphore that is top most from said waiting list ;
and returning control (queue usage detector module) of said second computerized processor to said second software component .

US20070288931A1
CLAIM 13
. A method for exchanging messages (queue requests) between a first software component running on a first computerized processor and a second software component running on a second computerized processor , wherein the first computerized processor and the second computerized processor have access to a shared memory , the method comprising : (a) attempting with the first software component to load a message queue write register with a message token that is a pointer to the message or that is the message itself ;
(b) determining whether there is space for the message token in a message queue in the shared memory , wherein said determining is triggered by said (a) having occurred and is performed atomically with respect to the software components ;
(c) if said (b) indicates that said space is available , updating said message queue , wherein said updating is also atomically with respect to the software components and includes : (1) incrementing a message counter ;
(2) writing said message token into said message queue at a location designated by a write pointer ;
and (3) changing said write pointer to point to a next location in said message queue ;
(d) attempting with the second software component to load said message token from a message queue read register ;
(e) determining whether said message token is new , thereby indicating whether there is at least one new message in the message queue , and wherein said determining is triggered by said (d) having occurred and is performed atomically with respect to the software components ;
(f) if said (e) indicates that the message is new , updating said message queue , wherein said updating is also atomically with respect to the software components and includes : (1) decrementing said message counter ;
(2) reading said message token from said message queue at a location designated by a read pointer ;
and (3) changing said read pointer to point to a next location in said message queue .

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module (turning control) is further configured to observe the at least one observed datacenter queue request .
US20070288931A1
CLAIM 5
. The method of claim 4 , wherein said interrupting includes : signaling a read semaphore that is top most in a waiting list ;
removing said read semaphore that is top most from said waiting list ;
and returning control (queue usage detector module) of said second computerized processor to said second software component .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests (exchanging messages) from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter (queue management) location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070288931A1
CLAIM 13
. A method for exchanging messages (queue requests) between a first software component running on a first computerized processor and a second software component running on a second computerized processor , wherein the first computerized processor and the second computerized processor have access to a shared memory , the method comprising : (a) attempting with the first software component to load a message queue write register with a message token that is a pointer to the message or that is the message itself ;
(b) determining whether there is space for the message token in a message queue in the shared memory , wherein said determining is triggered by said (a) having occurred and is performed atomically with respect to the software components ;
(c) if said (b) indicates that said space is available , updating said message queue , wherein said updating is also atomically with respect to the software components and includes : (1) incrementing a message counter ;
(2) writing said message token into said message queue at a location designated by a write pointer ;
and (3) changing said write pointer to point to a next location in said message queue ;
(d) attempting with the second software component to load said message token from a message queue read register ;
(e) determining whether said message token is new , thereby indicating whether there is at least one new message in the message queue , and wherein said determining is triggered by said (d) having occurred and is performed atomically with respect to the software components ;
(f) if said (e) indicates that the message is new , updating said message queue , wherein said updating is also atomically with respect to the software components and includes : (1) decrementing said message counter ;
(2) reading said message token from said message queue at a location designated by a read pointer ;
and (3) changing said read pointer to point to a next location in said message queue .

US20070288931A1
CLAIM 14
. A system for a first software component running on a first computerized processor to write a message to a shared memory that is accessible by a second software component running on a second computerized processor , comprising : load means for the first software component to attempt to load a message queue write register with a message token that is a pointer to the message or that is the message itself ;
a message queue management (second datacenter, second datacenter location) unit including : determination means for determining , atomically with respect to the software components , whether there is space for the message token in a message queue in the shared memory ;
and updating means responsive to said determination means for updating said message queue atomically with respect to the software components , wherein said updating means includes : means for incrementing a message counter ;
means for writing said message token into said message queue at a location designated by a write pointer ;
and means for changing said write pointer to point to a next location in said message queue .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070174398A1

Filed: 2006-01-25     Issued: 2007-07-26

Systems and methods for communicating logic in e-mail messages

(Original Assignee) StrongMail Systems Inc     (Current Assignee) Selligent Inc

Frank Addante
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request (web service) to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request (web service) from the consumer worker to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue is configured to hide a requested message upon receiving the message request (web service) from the consumer worker .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request (web service) to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module (processing module) configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .

US20070174398A1
CLAIM 41
. A computer system for distributing one or more encoded messages to one or more recipients , the computer system comprising : a central processing unit ;
and a memory , coupled to the central processing unit , the memory storing a request processing module (processing module) and a message transfer agent module , wherein the message processing module comprises instructions for : (A) receiving an electronic request using an e-mail protocol , wherein said electronic request includes : instructions for accessing one or more destinations corresponding to said one or more recipients ;
and logic for accessing data relating to recipients in said one or more recipients ;
(B) obtaining said data relating to recipients in said one or more recipients using said logic for accessing said data ;
(C) formatting , for each respective recipient in said one or more recipients , a message body of a message corresponding to said respective recipient using said data relating to said respective recipient obtained in step (B) thereby constructing said one or more encoded messages ;
and (D) distributing said one or more encoded messages to said one or more recipients using said one or more corresponding destinations .
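Steps (A)-(D) of US20070174398A1 claim 41 above amount to a per-recipient mail-merge loop: obtain each recipient's data via the logic carried in the request, format a message body, and distribute it. The sketch below is illustrative only; the function name, the dictionary-shaped request, and the `send` callback are hypothetical assumptions, not the patent's interface.

```python
# Hypothetical sketch of steps (A)-(D) of claim 41: the request carries
# the recipient list and the logic for fetching recipient data, and one
# encoded message is formatted and distributed per recipient.

def distribute_encoded_messages(request, send):
    """Format one message body per recipient and hand each to `send`."""
    results = []
    for recipient in request["recipients"]:          # (A) destinations
        data = request["fetch_data"](recipient)      # (B) recipient data
        body = request["template"].format(**data)    # (C) format message body
        results.append(send(recipient, body))        # (D) distribute
    return results
```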

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module (processing module) is further configured to build a table of queue usage based on at least one observed datacenter queue request .
US20070174398A1
CLAIM 41
. A computer system for distributing one or more encoded messages to one or more recipients , the computer system comprising : a central processing unit ;
and a memory , coupled to the central processing unit , the memory storing a request processing module (processing module) and a message transfer agent module , wherein the message processing module comprises instructions for : (A) receiving an electronic request using an e-mail protocol , wherein said electronic request includes : instructions for accessing one or more destinations corresponding to said one or more recipients ;
and logic for accessing data relating to recipients in said one or more recipients ;
(B) obtaining said data relating to recipients in said one or more recipients using said logic for accessing said data ;
(C) formatting , for each respective recipient in said one or more recipients , a message body of a message corresponding to said respective recipient using said data relating to said respective recipient obtained in step (B) thereby constructing said one or more encoded messages ;
and (D) distributing said one or more encoded messages to said one or more recipients using said one or more corresponding destinations .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module (processing module) is further configured to : intercept the message request (web service) from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .

US20070174398A1
CLAIM 41
. A computer system for distributing one or more encoded messages to one or more recipients , the computer system comprising : a central processing unit ;
and a memory , coupled to the central processing unit , the memory storing a request processing module (processing module) and a message transfer agent module , wherein the message processing module comprises instructions for : (A) receiving an electronic request using an e-mail protocol , wherein said electronic request includes : instructions for accessing one or more destinations corresponding to said one or more recipients ;
and logic for accessing data relating to recipients in said one or more recipients ;
(B) obtaining said data relating to recipients in said one or more recipients using said logic for accessing said data ;
(C) formatting , for each respective recipient in said one or more recipients , a message body of a message corresponding to said respective recipient using said data relating to said respective recipient obtained in step (B) thereby constructing said one or more encoded messages ;
and (D) distributing said one or more encoded messages to said one or more recipients using said one or more corresponding destinations .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request (web service) from the consumer worker .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request (web service) to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request (web service) from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request (web service) from the consumer worker .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20060146991A1

Filed: 2006-01-05     Issued: 2006-07-06

Provisioning and management in a message publish/subscribe system

(Original Assignee) Tervela Inc     (Current Assignee) Tervela Inc

J. Thompson, Kul Singh, Pierre Fraval
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (data message) at a first server (external authentication) sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data message (producer worker) s ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US20060146991A1
CLAIM 40
. A messaging system as in claim 1 , wherein one or more of the provisioning and management systems are integrated with an external authentication (first server) and entitlement system .
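
For orientation, the '993 claim 1 flow charted above can be reduced to a minimal local queue-cache sketch: a first server intercepts a producer's message bound for a remote datacenter queue, caches it locally, serves a co-located consumer from the cache, and modifies the cached copy when a command-channel signal arrives. This is an illustrative sketch only, not code from either patent; the names (`LocalQueueCache`, `store`, `fetch`, `on_command_signal`) are hypothetical.

```python
# Hypothetical sketch of the '993 claim 1 steps (not from either patent).
from collections import deque

class LocalQueueCache:
    """Partial local copy of a remote datacenter queue (queue cache)."""
    def __init__(self):
        self._cache = deque()

    def store(self, message):
        # Producer side: the intercepted first message is stored locally.
        self._cache.append(message)

    def fetch(self):
        # Consumer side: a local message request is served from the cache.
        return self._cache.popleft() if self._cache else None

    def on_command_signal(self, signal):
        # Command-channel signal: modify (here, delete) the stored message.
        if signal == "delete" and self._cache:
            self._cache.popleft()

cache = LocalQueueCache()
cache.store("task-1")              # producer's first message, cached locally
assert cache.fetch() == "task-1"   # co-located consumer served locally
```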

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message sent by the producer worker (data message) before storing the first message .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data messages (producer worker) ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker (data message) and the consumer worker are co-located on a multi-core device at the first server (external authentication) .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data messages (producer worker) ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US20060146991A1
CLAIM 40
. A messaging system as in claim 1 , wherein one or more of the provisioning and management systems are integrated with an external authentication (first server) and entitlement system .

US8954993B2
CLAIM 4
. The method of claim 1 , wherein the producer worker (data message) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data messages (producer worker) ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker (data message) at a first server (external authentication) , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data messages (producer worker) ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US20060146991A1
CLAIM 40
. A messaging system as in claim 1 , wherein one or more of the provisioning and management systems are integrated with an external authentication (first server) and entitlement system .
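
The '993 claim 8 division of labor, a queue usage detector module plus a processing module, can be sketched as two small classes. All names and the co-location heuristic are assumptions for illustration, not either patent's disclosed implementation.

```python
# Hypothetical sketch of the claim 8 VMM modules (names are illustrative).

class QueueUsageDetector:
    """Detects producer and consumer workers from observed queue traffic."""
    def __init__(self):
        self.producers, self.consumers = set(), set()

    def observe_send(self, worker_id):
        self.producers.add(worker_id)      # sent a message -> producer

    def observe_request(self, worker_id):
        self.consumers.add(worker_id)      # requested a message -> consumer

    def co_located_pairs(self):
        # Simplifying assumption: all observed workers share the first server.
        return {(p, c) for p in self.producers for c in self.consumers}

class ProcessingModule:
    """Intercepts, stores, serves, and modifies messages locally."""
    def __init__(self):
        self.stored = []

    def intercept(self, message):
        self.stored.append(message)        # store at the first server

    def provide(self):
        return self.stored.pop(0) if self.stored else None

    def on_signal(self, signal):
        if signal == "delete" and self.stored:
            self.stored.pop(0)             # modify on command-channel signal

detector = QueueUsageDetector()
detector.observe_send("vm-a")
detector.observe_request("vm-b")
assert ("vm-a", "vm-b") in detector.co_located_pairs()
```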

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker (data message) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data messages (producer worker) ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker (data message) that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data messages (producer worker) ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message sent by the producer worker (data message) before storing the first message .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data messages (producer worker) ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker (data message) and the consumer worker are co-located on a multi-core device at the first datacenter location .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data messages (producer worker) ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070156834A1

Filed: 2005-12-29     Issued: 2007-07-05

Cursor component for messaging service

(Original Assignee) SAP SE     (Current Assignee) SAP SE

Radoslav Nikolov, Desislav Bantchovski, Stoyan Vellev
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (first message) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel (acknowledging receipt) associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20070156834A1
CLAIM 1
. An article of manufacture including program code which , when executed by a machine , causes the machine to implement a messaging service method , the method comprising : maintaining a consumer specific table of references , each of said references pointing to its own respective message in memory , each said reference and respective message pair corresponding to a different message priority level , each said respective message being a first message (first message) within a link list waiting to be acknowledged as having been received by said consumer , said link list linking messages for a plurality of consumers including said consumer , each of said messages having said respective message's priority level .

US20070156834A1
CLAIM 8
. The article of manufacture of claim 5 wherein said method further comprises flushing said highest priority message from said memory in response to said consumer acknowledging receipt (command channel) of said highest priority message .
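
The mapped '834 claim 8 behavior, flushing a held message from memory once the consumer acknowledges receipt, which this chart reads onto the '993 command-channel signal and message modification, can be sketched as follows. The class and method names are hypothetical and the sketch is a simplification, not the '834 disclosure.

```python
# Hypothetical sketch of ack-driven flushing (not from US20070156834A1).

class AckedStore:
    """Holds delivered messages until the consumer acknowledges receipt."""
    def __init__(self):
        self.pending = {}               # message id -> message body

    def deliver(self, msg_id, body):
        self.pending[msg_id] = body     # held in memory awaiting ack
        return body

    def acknowledge(self, msg_id):
        # The acknowledgment plays the role of the charted "signal":
        # the stored message is flushed (deleted) in response.
        self.pending.pop(msg_id, None)

store = AckedStore()
store.deliver(1, "hello")
store.acknowledge(1)
assert 1 not in store.pending           # flushed after acknowledgment
```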

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (first message) sent by the producer worker before storing the first message .
US20070156834A1
CLAIM 1
. An article of manufacture including program code which , when executed by a machine , causes the machine to implement a messaging service method , the method comprising : maintaining a consumer specific table of references , each of said references pointing to its own respective message in memory , each said reference and respective message pair corresponding to a different message priority level , each said respective message being a first message (first message) within a link list waiting to be acknowledged as having been received by said consumer , said link list linking messages for a plurality of consumers including said consumer , each of said messages having said respective message's priority level .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (first message) includes deleting the first message .
US20070156834A1
CLAIM 1
. An article of manufacture including program code which , when executed by a machine , causes the machine to implement a messaging service method , the method comprising : maintaining a consumer specific table of references , each of said references pointing to its own respective message in memory , each said reference and respective message pair corresponding to a different message priority level , each said respective message being a first message (first message) within a link list waiting to be acknowledged as having been received by said consumer , said link list linking messages for a plurality of consumers including said consumer , each of said messages having said respective message's priority level .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (first message) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel (acknowledging receipt) associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070156834A1
CLAIM 1
. An article of manufacture including program code which , when executed by a machine , causes the machine to implement a messaging service method , the method comprising : maintaining a consumer specific table of references , each of said references pointing to its own respective message in memory , each said reference and respective message pair corresponding to a different message priority level , each said respective message being a first message (first message) within a link list waiting to be acknowledged as having been received by said consumer , said link list linking messages for a plurality of consumers including said consumer , each of said messages having said respective message's priority level .

US20070156834A1
CLAIM 8
. The article of manufacture of claim 5 wherein said method further comprises flushing said highest priority message from said memory in response to said consumer acknowledging receipt (command channel) of said highest priority message .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message (first message) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel (acknowledging receipt) associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070156834A1
CLAIM 1
. An article of manufacture including program code which , when executed by a machine , causes the machine to implement a messaging service method , the method comprising : maintaining a consumer specific table of references , each of said references pointing to its own respective message in memory , each said reference and respective message pair corresponding to a different message priority level , each said respective message being a first message (first message) within a link list waiting to be acknowledged as having been received by said consumer , said link list linking messages for a plurality of consumers including said consumer , each of said messages having said respective message's priority level .

US20070156834A1
CLAIM 8
. The article of manufacture of claim 5 wherein said method further comprises flushing said highest priority message from said memory in response to said consumer acknowledging receipt (command channel) of said highest priority message .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (first message) sent by the producer worker before storing the first message .
US20070156834A1
CLAIM 1
. An article of manufacture including program code which , when executed by a machine , causes the machine to implement a messaging service method , the method comprising : maintaining a consumer specific table of references , each of said references pointing to its own respective message in memory , each said reference and respective message pair corresponding to a different message priority level , each said respective message being a first message (first message) within a link list waiting to be acknowledged as having been received by said consumer , said link list linking messages for a plurality of consumers including said consumer , each of said messages having said respective message's priority level .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (first message) by deleting the first message .
US20070156834A1
CLAIM 1
. An article of manufacture including program code which , when executed by a machine , causes the machine to implement a messaging service method , the method comprising : maintaining a consumer specific table of references , each of said references pointing to its own respective message in memory , each said reference and respective message pair corresponding to a different message priority level , each said respective message being a first message (first message) within a link list waiting to be acknowledged as having been received by said consumer , said link list linking messages for a plurality of consumers including said consumer , each of said messages having said respective message's priority level .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20060168070A1

Filed: 2005-12-23     Issued: 2006-07-27

Hardware-based messaging appliance

(Original Assignee) Tervela Inc     (Current Assignee) Tervela Inc

J. Thompson, Kul Singh, Pierre Fraval
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (data message) at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request (incoming messages) to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message sent by the producer worker (data message) before storing the first message .
US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker (data message) and the consumer worker are co-located on a multi-core device at the first server .
US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US8954993B2
CLAIM 4
. The method of claim 1 , wherein the producer worker (data message) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request (incoming messages) from the consumer worker to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue is configured to hide a requested message upon receiving the message request (incoming messages) from the consumer worker .
US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .
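
'993 claims 6-7, charted above, gate forwarding of an intercepted message request on a first criterion, e.g. whether the remote datacenter queue hides a requested message upon receiving the request. A minimal sketch, assuming visibility-timeout-style semantics; the function name and callback parameters are hypothetical.

```python
# Hypothetical sketch of the claims 6-7 forwarding decision.

def handle_request(request, queue_hides_on_receive, forward, serve_locally):
    """Forward the intercepted request to the remote queue only when the
    first criterion is met; otherwise refrain and serve it locally."""
    if queue_hides_on_receive:        # claim 7's example of the criterion
        return forward(request)
    return serve_locally(request)

sent = []
result = handle_request(
    "get-msg",
    queue_hides_on_receive=True,
    forward=lambda r: sent.append(r) or "forwarded",
    serve_locally=lambda r: "local",
)
assert result == "forwarded" and sent == ["get-msg"]
```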

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker (data message) at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request (incoming messages) to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module (processing module) configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US20060168070A1
CLAIM 32
. A hardware-based messaging appliance in a publish/subscribe middleware system , comprising : an interconnect bus ;
a management module having management service and administrative message engines interfacing with each other , the management module being configured to handle configuration and monitoring functions ;
a message processing unit having a message routing engine and a media switch fabric with a channel engine interfacing between them , the message processing unit being configured to handle message routing functions ;
one or more physical interface cards (PICs) for handling messages received or routed by the hardware messaging appliance and destined to or leaving the management module and the message processing unit ;
a service module including a time source , wherein the management module , the message processing module (processing module) , the one or more PICs and the service module , are interconnected via the interconnect bus .

US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module (processing module) is further configured to build a table of queue usage based on at least one observed datacenter queue request .
US20060168070A1
CLAIM 32
. A hardware-based messaging appliance in a publish/subscribe middleware system , comprising : an interconnect bus ;
a management module having management service and administrative message engines interfacing with each other , the management module being configured to handle configuration and monitoring functions ;
a message processing unit having a message routing engine and a media switch fabric with a channel engine interfacing between them , the message processing unit being configured to handle message routing functions ;
one or more physical interface cards (PICs) for handling messages received or routed by the hardware messaging appliance and destined to or leaving the management module and the message processing unit ;
a service module including a time source , wherein the management module , the message processing module (processing module) , the one or more PICs and the service module , are interconnected via the interconnect bus .
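
'993 claim 9's table of queue usage, built from observed datacenter queue requests, can be sketched as a simple tally; the tuple shape and field names below are assumptions for illustration, not structures disclosed in either patent.

```python
# Hypothetical sketch of building a queue-usage table from observations.
from collections import defaultdict

def build_usage_table(observed_requests):
    """observed_requests: iterable of (worker_id, queue_name, op) tuples,
    where op is "send" or "receive". Returns per-queue operation counts,
    from which heavily used queues could be selected for local caching."""
    table = defaultdict(lambda: {"send": 0, "receive": 0})
    for worker, queue, op in observed_requests:
        table[queue][op] += 1
    return dict(table)

table = build_usage_table([
    ("vm-a", "q1", "send"),
    ("vm-b", "q1", "receive"),
    ("vm-a", "q2", "send"),
])
assert table["q1"] == {"send": 1, "receive": 1}
```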

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker (data message) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module (processing module) is further configured to : intercept the message request (incoming messages) from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20060168070A1
CLAIM 32
. A hardware-based messaging appliance in a publish/subscribe middleware system , comprising : an interconnect bus ;
a management module having management service and administrative message engines interfacing with each other , the management module being configured to handle configuration and monitoring functions ;
a message processing unit having a message routing engine and a media switch fabric with a channel engine interfacing between them , the message processing unit being configured to handle message routing functions ;
one or more physical interface cards (PICs) for handling messages received or routed by the hardware messaging appliance and destined to or leaving the management module and the message processing unit ;
a service module including a time source , wherein the management module , the message processing module (processing module) , the one or more PICs and the service module , are interconnected via the interconnect bus .

US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request (incoming messages) from the consumer worker .
US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker (data message) that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM (configuration parameters) and sends a message request (incoming messages) to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20060168070A1
CLAIM 10
. A hardware-based messaging appliance as in claim 9 , wherein the logical configuration paths are used for configuration information , and wherein the administrative messages contain such configuration information , including one or more of Syslog configuration parameters (second VM) , network time protocol (NTP) configuration parameters , domain name server (DNS) information , remote access policy , authentication methods , publish/subscribe entitlements and message routing information .

US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .
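For orientation, the datacenter-controller steps recited in US8954993B2 claim 14 above (intercept the producer's message, store it in a local queue cache, serve the consumer's request locally, and modify stored messages on a command-channel signal) can be sketched as follows. All class and method names are illustrative assumptions, not the patented implementation.

```python
# Hedged sketch of the claim 14 controller steps; names are assumptions.

class QueueCache:
    """Local copy (or partial copy) of a remote datacenter queue."""
    def __init__(self):
        self.messages = []

    def store(self, message):
        self.messages.append(message)

    def pop(self):
        return self.messages.pop(0) if self.messages else None


class DatacenterController:
    def __init__(self):
        self.cache = QueueCache()

    def on_producer_send(self, message):
        # Intercept the first message before it reaches the remote queue
        # and store it in the local queue cache instead.
        self.cache.store(message)

    def on_consumer_request(self, request):
        # Serve the consumer's message request from the local cache.
        return self.cache.pop()

    def on_command_channel_signal(self, signal, transform):
        # Modify the stored messages in response to a command-channel signal.
        self.cache.messages = [transform(m) for m in self.cache.messages]
```

In this sketch both workers interact only with the local cache, matching the claim's requirement that the message is "stored and provided from within a server."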

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message sent by the producer worker (data message) before storing the first message .
US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker (data message) and the consumer worker are co-located on a multi-core device at the first datacenter location .
US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request (incoming messages) from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request (incoming messages) from the consumer worker .
US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .
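The "hide the requested message upon receiving the message request" criterion in claims 13 and 23 resembles a visibility-timeout queue, where delivery hides a message rather than deleting it. The sketch below assumes that mechanism purely for illustration; the patent does not prescribe it, and all names are hypothetical.

```python
# Assumed visibility-timeout model of a queue that hides (does not delete)
# a message when it is delivered to a consumer.

class HidingQueue:
    def __init__(self, visibility_timeout=30.0):
        self.visibility_timeout = visibility_timeout
        self.entries = []  # list of [message, hidden_until]

    def send(self, message, now=0.0):
        self.entries.append([message, now])

    def receive(self, now):
        for entry in self.entries:
            if entry[1] <= now:
                # Hide the message for the visibility window instead of
                # removing it; it reappears if never deleted.
                entry[1] = now + self.visibility_timeout
                return entry[0]
        return None

q = HidingQueue(visibility_timeout=30.0)
q.send("m", now=0.0)
first = q.receive(now=1.0)    # delivered, then hidden until t=31.0
second = q.receive(now=2.0)   # still hidden: nothing available
```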




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070094664A1

Filed: 2005-10-21     Issued: 2007-04-26

Programmable priority for concurrent multi-threaded processors

(Original Assignee) Broadcom Corp     (Current Assignee) Avago Technologies General IP Singapore Pte Ltd

Kimming So, BaoBinh Truong, Yang Lu, Hon-Chong Ho, Li-Hung Chang, Chia-Cheng Choung, Jason Leonard
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server (second request) sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache (cache line) at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20070094664A1
CLAIM 9
. The method of claim 1 wherein prioritizing the first processor in performing the first process relative to the second processor in performing the second process comprises : receiving , at a shared hardware resource , a first request from the first thread processor and a second request (first server) from the second processor ;
accessing the priority information in the control register ;
and providing access to the shared hardware resource to the first thread processor , based on the priority information .

US20070094664A1
CLAIM 11
. The method of claim 1 prioritizing the first processor in performing the first process relative to the second processor in performing the second process comprises : restricting the second processor to re-fill a cache line (queue cache) only in an assigned portion of a cache during the second process .
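The priority arbitration recited in US20070094664A1 claims 9 and 11 can be reduced to a shared-resource arbiter that consults priority information in a control register before granting access. The sketch below is an assumption-laden illustration, not the reference's hardware design.

```python
# Illustrative arbiter (assumed names): grant the shared hardware resource
# to the requesting processor with the highest priority in the control register.

class SharedResourceArbiter:
    def __init__(self, control_register):
        # control_register maps processor id -> priority (higher wins)
        self.control_register = control_register

    def grant(self, requests):
        # requests: processor ids with pending requests for the resource
        return max(requests, key=lambda pid: self.control_register[pid])

arbiter = SharedResourceArbiter({"p0": 2, "p1": 1})
winner = arbiter.grant(["p0", "p1"])  # first thread processor wins
```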

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server (second request) .
US20070094664A1
CLAIM 9
. The method of claim 1 wherein prioritizing the first processor in performing the first process relative to the second processor in performing the second process comprises : receiving , at a shared hardware resource , a first request from the first thread processor and a second request (first server) from the second processor ;
accessing the priority information in the control register ;
and providing access to the shared hardware resource to the first thread processor , based on the priority information .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server (second request) , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070094664A1
CLAIM 9
. The method of claim 1 wherein prioritizing the first processor in performing the first process relative to the second processor in performing the second process comprises : receiving , at a shared hardware resource , a first request from the first thread processor and a second request (first server) from the second processor ;
accessing the priority information in the control register ;
and providing access to the shared hardware resource to the first thread processor , based on the priority information .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache (cache line) at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070094664A1
CLAIM 11
. The method of claim 1 prioritizing the first processor in performing the first process relative to the second processor in performing the second process comprises : restricting the second processor to re-fill a cache line (queue cache) only in an assigned portion of a cache during the second process .

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache (cache line) includes one of a copy and a partial copy of the datacenter queue .
US20070094664A1
CLAIM 11
. The method of claim 1 prioritizing the first processor in performing the first process relative to the second processor in performing the second process comprises : restricting the second processor to re-fill a cache line (queue cache) only in an assigned portion of a cache during the second process .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20060031568A1

Filed: 2005-10-12     Issued: 2006-02-09

Adaptive flow control protocol

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC

Vadim Eydelman, Khawar Zuberi
US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue request (buffering data) .
US20060031568A1
CLAIM 1
. A system for transferring data to a receiving application in a computer environment , comprising : a sending application configured to provide data to the receiving application ;
one or more transmission buffers for buffering data (datacenter queue request) provided by the sending application ;
one or more message buffers to hold messages sent to and received from the receiving application ;
a transport provider configured to : transfer data to the receiving application in blocks of data exceeding a threshold size using direct memory access read operations if the receiving application posts a receive buffer that exceeds the threshold size when posting a send for a pre-selected number of initial data blocks ;
if the receive buffer is posted at a time prior to a time at which the send was posted , send data and a Remote Direct Memory Access (RDMA) receive advertisement in a message if a send buffer posted by the receiving application is of a size below the threshold size and data or RDMA Read information has not been received ;
if the send is posted prior to the receive buffer : copy data to a send buffer having sufficient space to include a receive advertisement in a message header when a small send occurs ;
within a specified time limit , insert the receive advertisement in the message header if the receiving application posts a receive buffer exceeding the threshold size ;
and send the message .
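The transport provider's path selection in US20060031568A1 claim 1 (RDMA read operations for blocks exceeding a threshold size when the receiver has posted a sufficiently large buffer, inline sends otherwise) can be reduced to a simplified decision. The threshold value and function names below are assumptions for illustration.

```python
# Simplified sketch (assumed threshold and names) of the adaptive
# flow-control path selection: large transfers via RDMA read, small
# sends copied inline with the message.

THRESHOLD = 16 * 1024  # assumed threshold size in bytes

def choose_transfer_path(data_len, receive_buffer_len):
    if data_len > THRESHOLD and receive_buffer_len > THRESHOLD:
        return "rdma_read"    # receiver pulls the block via direct memory access
    return "inline_send"      # copy data into a send buffer with the message

large = choose_transfer_path(64 * 1024, 64 * 1024)
small = choose_transfer_path(512, 64 * 1024)
```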

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue request (buffering data) .
US20060031568A1
CLAIM 1
. A system for transferring data to a receiving application in a computer environment , comprising : a sending application configured to provide data to the receiving application ;
one or more transmission buffers for buffering data (datacenter queue request) provided by the sending application ;
one or more message buffers to hold messages sent to and received from the receiving application ;
a transport provider configured to : transfer data to the receiving application in blocks of data exceeding a threshold size using direct memory access read operations if the receiving application posts a receive buffer that exceeds the threshold size when posting a send for a pre-selected number of initial data blocks ;
if the receive buffer is posted at a time prior to a time at which the send was posted , send data and a Remote Direct Memory Access (RDMA) receive advertisement in a message if a send buffer posted by the receiving application is of a size below the threshold size and data or RDMA Read information has not been received ;
if the send is posted prior to the receive buffer : copy data to a send buffer having sufficient space to include a receive advertisement in a message header when a small send occurs ;
within a specified time limit , insert the receive advertisement in the message header if the receiving application posts a receive buffer exceeding the threshold size ;
and send the message .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location (data blocks) different from the first ;

detect a consumer worker that is executed on a second VM (specified time limit) and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20060031568A1
CLAIM 1
. A system for transferring data to a receiving application in a computer environment , comprising : a sending application configured to provide data to the receiving application ;
one or more transmission buffers for buffering data provided by the sending application ;
one or more message buffers to hold messages sent to and received from the receiving application ;
a transport provider configured to : transfer data to the receiving application in blocks of data exceeding a threshold size using direct memory access read operations if the receiving application posts a receive buffer that exceeds the threshold size when posting a send for a pre-selected number of initial data blocks (second datacenter location) ;
if the receive buffer is posted at a time prior to a time at which the send was posted , send data and a Remote Direct Memory Access (RDMA) receive advertisement in a message if a send buffer posted by the receiving application is of a size below the threshold size and data or RDMA Read information has not been received ;
if the send is posted prior to the receive buffer : copy data to a send buffer having sufficient space to include a receive advertisement in a message header when a small send occurs ;
within a specified time limit (second VM) , insert the receive advertisement in the message header if the receiving application posts a receive buffer exceeding the threshold size ;
and send the message .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue request (buffering data) .
US20060031568A1
CLAIM 1
. A system for transferring data to a receiving application in a computer environment , comprising : a sending application configured to provide data to the receiving application ;
one or more transmission buffers for buffering data (datacenter queue request) provided by the sending application ;
one or more message buffers to hold messages sent to and received from the receiving application ;
a transport provider configured to : transfer data to the receiving application in blocks of data exceeding a threshold size using direct memory access read operations if the receiving application posts a receive buffer that exceeds the threshold size when posting a send for a pre-selected number of initial data blocks ;
if the receive buffer is posted at a time prior to a time at which the send was posted , send data and a Remote Direct Memory Access (RDMA) receive advertisement in a message if a send buffer posted by the receiving application is of a size below the threshold size and data or RDMA Read information has not been received ;
if the send is posted prior to the receive buffer : copy data to a send buffer having sufficient space to include a receive advertisement in a message header when a small send occurs ;
within a specified time limit , insert the receive advertisement in the message header if the receiving application posts a receive buffer exceeding the threshold size ;
and send the message .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue request (buffering data) .
US20060031568A1
CLAIM 1
. A system for transferring data to a receiving application in a computer environment , comprising : a sending application configured to provide data to the receiving application ;
one or more transmission buffers for buffering data (datacenter queue request) provided by the sending application ;
one or more message buffers to hold messages sent to and received from the receiving application ;
a transport provider configured to : transfer data to the receiving application in blocks of data exceeding a threshold size using direct memory access read operations if the receiving application posts a receive buffer that exceeds the threshold size when posting a send for a pre-selected number of initial data blocks ;
if the receive buffer is posted at a time prior to a time at which the send was posted , send data and a Remote Direct Memory Access (RDMA) receive advertisement in a message if a send buffer posted by the receiving application is of a size below the threshold size and data or RDMA Read information has not been received ;
if the send is posted prior to the receive buffer : copy data to a send buffer having sufficient space to include a receive advertisement in a message header when a small send occurs ;
within a specified time limit , insert the receive advertisement in the message header if the receiving application posts a receive buffer exceeding the threshold size ;
and send the message .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070168567A1

Filed: 2005-08-31     Issued: 2007-07-19

System and method for file based I/O directly between an application instance and an I/O adapter

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

William Boyd, John Hufferd, Agustin Mena, Renato Recio, Madeline Vega
US8954993B2
CLAIM 1
. A method to locally process queue requests (system memory, I/O request) from co-located workers in a datacenter , the method comprising : detecting a producer worker (storage location) at a first server sending a first message to a datacenter queue (system memory, I/O request) at least partially stored at a second server ;

storing the first message in a queue cache (system memory, I/O request) at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker (storage location) at the first server sending a message request (start address) to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .
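The processing queue entry recited in US20070168567A1 claim 5 (start, end, and head addresses plus an unprocessed-entry count) behaves like a ring-buffer descriptor over the application instance's queue in system memory. The sketch below assumes fixed-size entries and illustrative field names.

```python
# Sketch (assumed field names, fixed-size entries) of a processing queue
# entry bounding an application queue in system memory.

from dataclasses import dataclass

@dataclass
class ProcessingQueueEntry:
    start_address: int   # first entry of the application's queue
    end_address: int     # last entry of the application's queue
    head_address: int    # next entry to be processed by the I/O adapter
    count: int           # entries not yet processed

    def advance(self, entry_size):
        """Consume one work request, wrapping past the last entry."""
        self.head_address += entry_size
        if self.head_address > self.end_address:
            self.head_address = self.start_address
        self.count -= 1

pqe = ProcessingQueueEntry(0x1000, 0x1FC0, 0x1F80, 2)
pqe.advance(0x40)   # head moves to the last entry
pqe.advance(0x40)   # head wraps back to start_address
```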

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message sent by the producer worker (storage location) before storing the first message .
US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker (storage location) and the consumer worker (storage location) are co-located on a multi-core device at the first server .
US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 4
. The method of claim 1 , wherein the producer worker (storage location) and the consumer worker (storage location) are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request (start address) from the consumer worker (storage location) to the datacenter queue (system memory, I/O request) ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue (system memory, I/O request) is configured to hide a requested message upon receiving the message request (start address) from the consumer worker (storage location) .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .
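The four fields claim 5 recites (start, end, head, count) describe a ring-buffer-style descriptor. A minimal sketch, with memory addresses modeled as plain integer indices (an assumption for illustration only):

```python
from dataclasses import dataclass

@dataclass
class ProcessingQueueEntry:
    """The four fields recited in claim 5 of US2007/0168567,
    modeled as integer indices into a flat memory array."""
    start_address: int  # first entry of the application's queue
    end_address: int    # last entry of the application's queue
    head_address: int   # next entry the adapter will process
    count: int          # entries not yet processed

    def advance(self):
        """Consume one entry, wrapping from end back to start."""
        if self.count == 0:
            return None
        addr = self.head_address
        self.head_address = (
            self.start_address if addr == self.end_address else addr + 1
        )
        self.count -= 1
        return addr

entry = ProcessingQueueEntry(start_address=100, end_address=103,
                             head_address=102, count=3)
visited = [entry.advance() for _ in range(3)]  # wraps past the end
```

The head pointer wraps from the end address back to the start address, which is why both bounds must travel with the descriptor.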

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests (system memory, I/O request) from co-located workers in a datacenter , the VMM comprising : a queue usage (system memory, I/O request) detector module configured to : detect a producer worker (storage location) at a first server , wherein the producer worker sends a first message to a datacenter queue (system memory, I/O request) at least partially stored at a second server ;

and detect a consumer worker (storage location) at the first server , wherein the consumer worker sends a message request (start address) to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
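The VMM behavior recited in claim 8 -- intercept a co-located producer's message before it reaches the remote datacenter queue, store it locally, serve it to the co-located consumer, and modify it on a command-channel signal -- can be sketched as follows. This is a minimal interpretive model with hypothetical names, not the patented implementation:

```python
class LocalQueueVMM:
    """Sketch of the claim 8 processing module: producer messages
    bound for a remote datacenter queue are kept at the first server
    and served from there to the co-located consumer."""

    def __init__(self):
        self.local_store = []

    def intercept_send(self, message):
        # the producer's message never leaves the server
        self.local_store.append(message)

    def serve_request(self):
        # the consumer's message request is answered locally
        return self.local_store.pop(0) if self.local_store else None

    def on_command_signal(self, index, new_body):
        # a command-channel signal modifies a stored message
        self.local_store[index] = new_body

vmm = LocalQueueVMM()
vmm.intercept_send({"body": "job-A"})
vmm.on_command_signal(0, {"body": "job-A-rev2"})
msg = vmm.serve_request()  # the consumer sees the modified message
```

The command-channel hook matters for charting because the stored copy must stay consistent with state changes announced by the remote queue.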
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage (system memory, I/O request) based on at least one observed datacenter queue (system memory, I/O request) request (I/O operation) .
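Claim 9's "table of queue usage" built from observed datacenter queue requests can be illustrated as a simple tally. The input tuple format here is an assumption made for illustration; the claim does not fix a representation:

```python
from collections import defaultdict

def build_usage_table(observed_requests):
    """Tally observed queue requests into a usage table: for each
    queue, which local worker performed which operation, how often.
    The (worker, op, queue) tuple shape is a hypothetical format."""
    table = defaultdict(lambda: defaultdict(int))
    for worker, op, queue in observed_requests:
        table[queue][(worker, op)] += 1
    return {q: dict(ops) for q, ops in table.items()}

observed = [
    ("producer-1", "send",    "orders"),
    ("consumer-7", "request", "orders"),
    ("producer-1", "send",    "orders"),
]
table = build_usage_table(observed)
```

Such a table is what lets the detector conclude that a producer and a consumer of the same queue are co-located, triggering local handling.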
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address that identifies a system memory (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation (datacenter queue request) on a storage location in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage (system memory, I/O request) detector module is further configured to observe the at least one observed datacenter queue (system memory, I/O request) request (I/O operation) .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address that identifies a system memory (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation (datacenter queue request) on a storage location in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker (storage location) and the consumer worker (storage location) are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request (start address) from the consumer worker (storage location) to the datacenter queue (system memory, I/O request) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
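The gating logic of claim 12 -- forward the intercepted message request to the real datacenter queue only when a first criterion is met, otherwise refrain -- can be sketched as below. The two callables are hypothetical hooks; the criterion shown anticipates claim 13's "does the queue hide a requested message" condition:

```python
def handle_message_request(request, queue_hides_on_receive,
                           forward, serve_locally):
    """Claim 12 gating sketch: forward only when the criterion holds,
    then answer the consumer from the local store either way."""
    if queue_hides_on_receive:
        # forwarding keeps the remote queue's hide/visibility
        # state consistent with what the consumer was given locally
        forward(request)
    return serve_locally(request)

forwarded = []
result = handle_message_request(
    request={"queue": "orders"},
    queue_hides_on_receive=True,
    forward=forwarded.append,
    serve_locally=lambda req: "cached-msg",
)
```

When the criterion is not met, the request never leaves the server and the remote queue sees no traffic at all.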
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue (system memory, I/O request) is configured to hide the requested message upon receiving the message request (start address) from the consumer worker (storage location) .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests (system memory, I/O request) from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker (storage location) that is executed on a first VM and sends a first message to a datacenter queue (system memory, I/O request) at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache (system memory, I/O request) at a second datacenter location different from the first ;

detect a consumer worker (storage location) that is executed on a second VM and sends a message request (start address) to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
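Claim 14 separates the authoritative datacenter queue (first location) from a queue cache (second location) under a single controller. A minimal sketch of that two-location arrangement, with hypothetical names and no claim to match the patented implementation:

```python
class DatacenterController:
    """Sketch of claim 14: the datacenter queue lives at one location,
    intercepted producer messages are cached at a second location,
    and the consumer is served from the cache when possible."""

    def __init__(self):
        self.remote_queue = []  # first datacenter location
        self.queue_cache = []   # second datacenter location

    def on_producer_send(self, message):
        # intercept before the message reaches the remote queue
        self.queue_cache.append(message)

    def on_consumer_request(self):
        # prefer the cache; fall back to the remote queue
        if self.queue_cache:
            return self.queue_cache.pop(0)
        return self.remote_queue.pop(0) if self.remote_queue else None

ctl = DatacenterController()
ctl.on_producer_send("m1")
got = ctl.on_consumer_request()  # served from the cache
```

The charting point is that the message is stored and provided from within one server even though the named queue resides elsewhere.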
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message sent by the producer worker (storage location) before storing the first message .
US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache (system memory, I/O request) includes one of a copy and a partial copy of the datacenter queue (system memory, I/O request) .
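Claim 16 allows the queue cache to hold either a full copy or a partial copy of the datacenter queue. One natural reading -- a prefix of the queue's head entries -- is sketched below (this reading is an assumption, not the claim's definition):

```python
def copy_into_cache(datacenter_queue, max_entries):
    """Return a cache holding a copy (max_entries >= len) or a
    partial copy (fewer head entries) of the datacenter queue."""
    return list(datacenter_queue[:max_entries])

queue = ["m1", "m2", "m3", "m4"]
full = copy_into_cache(queue, len(queue))  # full copy
partial = copy_into_cache(queue, 2)        # partial copy of the head
```

Either form satisfies the claim language; the partial form is what a bandwidth-limited cache would realistically hold.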
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address that identifies a system memory (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage (system memory, I/O request) based on at least one observed datacenter queue (system memory, I/O request) request (I/O operation) .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address that identifies a system memory (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation (datacenter queue request) on a storage location in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue (system memory, I/O request) request (I/O operation) .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address that identifies a system memory (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation (datacenter queue request) on a storage location in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker (storage location) and the consumer worker (storage location) are co-located on a multi-core device at the first datacenter location .
US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request (start address) from the consumer worker (storage location) to the datacenter queue (system memory, I/O request) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .
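Claim 5 of US20070168567A1 describes a concrete data structure: a processing-queue entry tracking an application queue by its system-memory bounds. A hedged sketch of one plausible layout, with an illustrative `advance` helper for the wrap-around behavior a ring-style queue would imply (the wrap rule is an assumption, not claim text):

```python
from dataclasses import dataclass

@dataclass
class ProcessingQueueEntry:
    """Illustrative layout of the entry recited in claim 5."""
    start_address: int  # system memory address of the queue's first entry
    end_address: int    # system memory address of the queue's last entry
    head_address: int   # next entry to be processed by the I/O adapter
    count: int          # entries not yet processed by the adapter

    def advance(self, entry_size: int) -> None:
        # Move the head forward one entry, wrapping past the end address.
        self.head_address += entry_size
        if self.head_address > self.end_address:
            self.head_address = self.start_address
        self.count -= 1

entry = ProcessingQueueEntry(start_address=0x1000, end_address=0x1F00,
                             head_address=0x1F00, count=2)
entry.advance(0x100)  # head wraps back to the start of the queue
```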

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue (system memory, I/O request) is configured to hide the requested message upon receiving the message request (start address) from the consumer worker (storage location) .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue cache includes one, queue requests, datacenter queue, queue usage) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070005572A1

Filed: 2005-06-29     Issued: 2007-01-04

Architecture and system for host management

(Original Assignee) Intel Corp     (Current Assignee) Intel Corp

Travis Schluessler, Priya Rajagopal, Ray Steinberger, Tisson Mathew, Arun Preetham, Ravi Sahita, David Durham, Karanvir Grewal
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (first message) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request (message request) to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel (second buffer) associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer (command channel) for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message (message request) requesting the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .
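The core mechanism of the '993 patent's claim 1, as charted above, can be sketched compactly: messages a co-located producer sends toward the remote datacenter queue are intercepted and cached at the first server, a co-located consumer's message request is answered from that cache, and a command-channel signal modifies (here, deletes) a cached message. Class and method names are illustrative assumptions.

```python
class LocalQueueCache:
    """Sketch of local interception per US8954993B2 claim 1."""

    def __init__(self):
        self.cache = []  # partial local copy of the datacenter queue

    def intercept_send(self, message):
        # Producer -> datacenter queue: store the message locally.
        self.cache.append(message)

    def handle_request(self):
        # Consumer message request: serve from the local copy.
        return self.cache[0] if self.cache else None

    def on_command_signal(self, signal, message):
        # Signal received on the command channel associated with the queue.
        if signal == "delete" and message in self.cache:
            self.cache.remove(message)

cache = LocalQueueCache()
cache.intercept_send("job-1")            # detected producer send
got = cache.handle_request()             # consumer receives cached message
cache.on_command_signal("delete", "job-1")  # modify stored first message
```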

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (first message) sent by the producer worker before storing the first message .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message (first message) requesting the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (first message) includes deleting the first message .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message (first message) requesting the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request (message request) from the consumer worker to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message (message request) requesting the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue is configured to hide a requested message upon receiving the message request (message request) from the consumer worker .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message (message request) requesting the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .
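Claims 6 and 7 condition forwarding of the intercepted consumer request on whether the remote queue hides (locks) a message once it is read. One plausible reading, sketched below: if the remote queue hides on read, the request should still reach it so the hide state stays consistent; either way the consumer is answered from the local cache. The decision rule and all names here are illustrative, not claim text.

```python
def route_request(request, queue_hides_on_read, forward, serve_locally):
    """Forward the intercepted message request only when the first
    criterion is met; answer the consumer from the local cache."""
    if queue_hides_on_read:
        # Keep the remote datacenter queue's hide state in sync.
        forward(request)
    return serve_locally(request)

forwarded = []
reply = route_request("get", True, forwarded.append, lambda r: "msg-1")
```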

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (first message) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request (message request) to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel (second buffer) associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer (command channel) for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message (message request) requesting the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request (message request) from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message (message request) requesting the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request (message request) from the consumer worker .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message (message request) requesting the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message (first message) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request (message request) to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel (second buffer) associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer (command channel) for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message (message request) requesting the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (first message) sent by the producer worker before storing the first message .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message (first message) requesting the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (first message) by deleting the first message .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message (first message) requesting the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request (message request) from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message (message request) requesting the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request (message request) from the consumer worker .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message (message request) requesting the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20060230209A1

Filed: 2005-04-07     Issued: 2006-10-12

Event queue structure and method

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

Thomas Gregg, Richard Arndt, Bruce Beukema, David Craddock, Ronald Fuhs, Steven Rogers, Donald Schmidt, Bruce Walk
US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue request (event handler) .
US20060230209A1
CLAIM 6
. The information processing system as claimed in claim 4 , further comprising a hypervisor operable to monitor a plurality of global events , at least one of said event queues being a global event queue dedicated to recording said global events , said hypervisor including a global event handler (datacenter queue request) operable to monitor entries of said global event queue to handle said global events .

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue request (event handler) .
US20060230209A1
CLAIM 6
. The information processing system as claimed in claim 4 , further comprising a hypervisor operable to monitor a plurality of global events , at least one of said event queues being a global event queue dedicated to recording said global events , said hypervisor including a global event handler (datacenter queue request) operable to monitor entries of said global event queue to handle said global events .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM (first event) and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20060230209A1
CLAIM 16
. An information processing system , comprising : a plurality of system resources ;
a first event (second VM) queue ;
a second event queue ;
an event recording mechanism operable in a first mode to make entries regarding first types of events in said first event queue , and being operable in a second mode to make entries regarding said first types of events in a second event queue , said second event queue preserving a time order in which said events are recorded as occurring .
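Claim 16 of US20060230209A1 recites a recorder with two modes: in the first mode, first-type events go to a dedicated first queue; in the second mode they instead go to a second queue that preserves the time order in which events of any type are recorded. A hedged sketch (names and the mode flag are illustrative):

```python
class EventRecorder:
    """Sketch of the dual-mode event recording mechanism of claim 16."""

    def __init__(self):
        self.first_queue = []
        self.second_queue = []
        self.mode = 1

    def record(self, event_type, event):
        if self.mode == 1 and event_type == "first":
            self.first_queue.append(event)
        else:
            # Appending preserves the order in which events are recorded,
            # interleaving first-type events with the others.
            self.second_queue.append((event_type, event))

rec = EventRecorder()
rec.record("first", "e1")   # mode 1: routed to the dedicated first queue
rec.mode = 2
rec.record("first", "e2")   # mode 2: time-ordered in the second queue
rec.record("other", "e3")
```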

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue request (event handler) .
US20060230209A1
CLAIM 6
. The information processing system as claimed in claim 4 , further comprising a hypervisor operable to monitor a plurality of global events , at least one of said event queues being a global event queue dedicated to recording said global events , said hypervisor including a global event handler (datacenter queue request) operable to monitor entries of said global event queue to handle said global events .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue request (event handler) .
US20060230209A1
CLAIM 6
. The information processing system as claimed in claim 4 , further comprising a hypervisor operable to monitor a plurality of global events , at least one of said event queues being a global event queue dedicated to recording said global events , said hypervisor including a global event handler (datacenter queue request) operable to monitor entries of said global event queue to handle said global events .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20060184948A1

Filed: 2005-02-17     Issued: 2006-08-17

System, method and medium for providing asynchronous input and output with less system calls to and from an operating system

(Original Assignee) Red Hat Inc     (Current Assignee) Red Hat Inc

Alan Cox
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (one processor) at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20060184948A1
CLAIM 21
. A computing device using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message sent by the producer worker (one processor) before storing the first message .
US20060184948A1
CLAIM 21
. A computing device using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker (one processor) and the consumer worker are co-located on a multi-core device at the first server .
US20060184948A1
CLAIM 21
. A computing device using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US8954993B2
CLAIM 4
. The method of claim 1 , wherein the producer worker (one processor) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20060184948A1
CLAIM 21
. A computing device using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker (one processor) at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20060184948A1
CLAIM 21
. A computing device using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker (one processor) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20060184948A1
CLAIM 21
. A computing device using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker (one processor) that is executed on a first VM (operating system kernel) and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20060184948A1
CLAIM 1
. A method for reducing the number of system calls from an application program to an operating system kernel (first VM) , comprising the steps of : creating a list of requests issued by an application program ;
associating an indicia with the list indicating whether the list contains a request ;
and adding a new application program request to the list when the indicia indicates that the list includes a request .
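Claim 1 of US20060184948A1 reduces system calls by keeping a list of pending requests plus an indicia saying whether work is already queued with the kernel: only the first request of a batch pays for a system call, and later requests are simply appended. A minimal sketch; the class name and the `syscall` callback are assumptions for illustration.

```python
class RequestBatcher:
    """Sketch of the request list + indicia of US20060184948A1 claim 1."""

    def __init__(self, syscall):
        self.requests = []
        self.has_pending = False  # the indicia: does the list contain a request?
        self.syscall = syscall    # stand-in for the call into the kernel

    def add(self, request):
        self.requests.append(request)
        if not self.has_pending:
            # List was empty: one system call hands the batch to the kernel.
            self.has_pending = True
            self.syscall(self.requests)
        # Otherwise the kernel already has the list; no further call needed.

calls = []
batcher = RequestBatcher(calls.append)
batcher.add("read A")   # triggers the single system call for this batch
batcher.add("read B")   # appended to the list without another call
```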

US20060184948A1
CLAIM 21
. A computing device using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message sent by the producer worker (one processor) before storing the first message .
US20060184948A1
CLAIM 21
. A computing device using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker (one processor) and the consumer worker are co-located on a multi-core device at the first datacenter location .
US20060184948A1
CLAIM 21
. A computing device using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20050125804A1

Filed: 2005-01-06     Issued: 2005-06-09

Queued component interface passing for results outflow from queued method invocations

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC

Richard Dievendorff, Patrick Helland, Gagan Chopra, Mohsen Al-Ghosein
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (first message) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
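To make the charted elements of claim 1 concrete, the steps (intercept a producer's send, keep a local copy, serve a co-located consumer from that copy, modify the copy on a command-channel signal) can be sketched as follows. This is a minimal illustrative reading with invented names, not the patented implementation:

```python
from collections import deque

class LocalQueueCache:
    """Local queue cache at the first server; `remote` stands in for the
    datacenter queue at the second server (here a plain deque stub)."""

    def __init__(self, remote_queue):
        self.remote = remote_queue
        self.cache = deque()            # copy / partial copy of the queue

    def on_producer_send(self, message):
        # Detect and intercept the producer's send: store the message
        # locally and still forward it to the remote datacenter queue.
        self.cache.append(message)
        self.remote.append(message)

    def on_consumer_request(self):
        # Provide the stored message to a co-located consumer when the
        # cache can serve it; otherwise fall through to the remote queue.
        if self.cache:
            return self.cache.popleft()
        return self.remote.popleft()

    def on_command_signal(self, signal):
        # Modify the stored message in response to a command-channel
        # signal; deletion is one modification the claims contemplate.
        if signal.get("op") == "delete":
            try:
                self.cache.remove(signal["message"])
            except ValueError:
                pass
```

The point of the arrangement is that the consumer's read is satisfied inside the first server, while the remote queue still sees the message for consistency.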
US20050125804A1
CLAIM 17
. A method of yielding results from processing work of a first queued component to a second queued component , where the work of the first queued component is initiated by method invocations delivered via a first message (first message) queue , and the second queued component is dispatched method invocations delivered into a second message queue , the method comprising : responsive to a client program issuing a first set of method invocations for the first queued component , marshaling data for the method invocations of the first set into a message to be enqueued into the first message queue ;
and when marshaling an interface pointer reference to the second queued component in any of the method invocations issued by the client program for the first queued component , incorporating interface passing information in the data marshaled into the message , the interface passing information designating to enqueue any method invocation by the first queued component on an interface of the second queued component referenced by the interface pointer reference into the second message queue .

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (first message) sent by the producer worker before storing the first message .
US20050125804A1
CLAIM 17
. A method of yielding results from processing work of a first queued component to a second queued component , where the work of the first queued component is initiated by method invocations delivered via a first message (first message) queue , and the second queued component is dispatched method invocations delivered into a second message queue , the method comprising : responsive to a client program issuing a first set of method invocations for the first queued component , marshaling data for the method invocations of the first set into a message to be enqueued into the first message queue ;
and when marshaling an interface pointer reference to the second queued component in any of the method invocations issued by the client program for the first queued component , incorporating interface passing information in the data marshaled into the message , the interface passing information designating to enqueue any method invocation by the first queued component on an interface of the second queued component referenced by the interface pointer reference into the second message queue .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (first message) includes deleting the first message .
US20050125804A1
CLAIM 17
. A method of yielding results from processing work of a first queued component to a second queued component , where the work of the first queued component is initiated by method invocations delivered via a first message (first message) queue , and the second queued component is dispatched method invocations delivered into a second message queue , the method comprising : responsive to a client program issuing a first set of method invocations for the first queued component , marshaling data for the method invocations of the first set into a message to be enqueued into the first message queue ;
and when marshaling an interface pointer reference to the second queued component in any of the method invocations issued by the client program for the first queued component , incorporating interface passing information in the data marshaled into the message , the interface passing information designating to enqueue any method invocation by the first queued component on an interface of the second queued component referenced by the interface pointer reference into the second message queue .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (first message) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20050125804A1
CLAIM 17
. A method of yielding results from processing work of a first queued component to a second queued component , where the work of the first queued component is initiated by method invocations delivered via a first message (first message) queue , and the second queued component is dispatched method invocations delivered into a second message queue , the method comprising : responsive to a client program issuing a first set of method invocations for the first queued component , marshaling data for the method invocations of the first set into a message to be enqueued into the first message queue ;
and when marshaling an interface pointer reference to the second queued component in any of the method invocations issued by the client program for the first queued component , incorporating interface passing information in the data marshaled into the message , the interface passing information designating to enqueue any method invocation by the first queued component on an interface of the second queued component referenced by the interface pointer reference into the second message queue .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message (first message) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter (one computer) location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
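The command-channel element of claims 14 and 21 (receive a signal, then modify, e.g. delete, the stored first message) can be pictured as a publish/subscribe path between the datacenter queue and the caches holding copies. The sketch below is hypothetical (invented names, simplified to deletion only):

```python
class CommandChannel:
    """Signals published by the datacenter queue reach subscribed caches,
    which then modify their stored message copies accordingly."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, signal):
        for cb in self.subscribers:
            cb(signal)


def make_cache():
    cache = {}                      # message id -> stored message body

    def on_signal(signal):
        # Modify the stored message in response to the signal: here,
        # delete it when the queue reports it was consumed remotely.
        if signal["op"] == "delete":
            cache.pop(signal["id"], None)

    return cache, on_signal
```

This keeps a locally stored copy from being re-delivered after the authoritative queue has already disposed of the message.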
US20050125804A1
CLAIM 17
. A method of yielding results from processing work of a first queued component to a second queued component , where the work of the first queued component is initiated by method invocations delivered via a first message (first message) queue , and the second queued component is dispatched method invocations delivered into a second message queue , the method comprising : responsive to a client program issuing a first set of method invocations for the first queued component , marshaling data for the method invocations of the first set into a message to be enqueued into the first message queue ;
and when marshaling an interface pointer reference to the second queued component in any of the method invocations issued by the client program for the first queued component , incorporating interface passing information in the data marshaled into the message , the interface passing information designating to enqueue any method invocation by the first queued component on an interface of the second queued component referenced by the interface pointer reference into the second message queue .

US20050125804A1
CLAIM 20
. At least one computer (second datacenter) -readable storage medium having stored thereon a software program executable on a computer to perform a method of yielding results from processing work of a first queued component to a second queued component , where the work of the first queued component is initiated by method invocations delivered via a first message queue , and the second queued component is dispatched method invocations delivered into a second message queue , the method comprising : responsive to a client program issuing a first set of method invocations for the first queued component , marshaling data for the method invocations of the first set into a message to be enqueued into the first message queue ;
and when marshaling an interface pointer reference to the second queued component in any of the method invocations issued by the client program for the first queued component , incorporating interface passing information in the data marshaled into the message , the interface passing information designating to enqueue any method invocation by the first queued component on an interface of the second queued component referenced by the interface pointer reference into the second message queue .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (first message) sent by the producer worker before storing the first message .
US20050125804A1
CLAIM 17
. A method of yielding results from processing work of a first queued component to a second queued component , where the work of the first queued component is initiated by method invocations delivered via a first message (first message) queue , and the second queued component is dispatched method invocations delivered into a second message queue , the method comprising : responsive to a client program issuing a first set of method invocations for the first queued component , marshaling data for the method invocations of the first set into a message to be enqueued into the first message queue ;
and when marshaling an interface pointer reference to the second queued component in any of the method invocations issued by the client program for the first queued component , incorporating interface passing information in the data marshaled into the message , the interface passing information designating to enqueue any method invocation by the first queued component on an interface of the second queued component referenced by the interface pointer reference into the second message queue .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (first message) by deleting the first message .
US20050125804A1
CLAIM 17
. A method of yielding results from processing work of a first queued component to a second queued component , where the work of the first queued component is initiated by method invocations delivered via a first message (first message) queue , and the second queued component is dispatched method invocations delivered into a second message queue , the method comprising : responsive to a client program issuing a first set of method invocations for the first queued component , marshaling data for the method invocations of the first set into a message to be enqueued into the first message queue ;
and when marshaling an interface pointer reference to the second queued component in any of the method invocations issued by the client program for the first queued component , incorporating interface passing information in the data marshaled into the message , the interface passing information designating to enqueue any method invocation by the first queued component on an interface of the second queued component referenced by the interface pointer reference into the second message queue .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20050071316A1

Filed: 2004-11-18     Issued: 2005-03-31

Method and apparatus for creating, sending, and using self-descriptive objects as messages over a message queuing network

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC

Ilan Caron
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker (one location) at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .
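The self-descriptive object of this reference carries its payload together with type indicators, an item count, and its own serialize/deserialize behavior. A minimal sketch of that idea, using JSON round-tripping as a stand-in for the claimed instructions (all names hypothetical):

```python
import json

class SelfDescriptiveObject:
    """Message object that travels with its own item count, per-item
    type names, and serialize/deserialize routines."""

    def __init__(self, items):
        self.items = list(items)

    def serialize(self):
        # The "second instruction": turn the object into wire data that
        # describes itself (count and type indicators included).
        return json.dumps({
            "count": len(self.items),
            "types": [type(i).__name__ for i in self.items],
            "items": self.items,
        })

    @staticmethod
    def deserialize(blob):
        # The "third instruction": rebuild the object, checking the
        # carried count against the actual payload.
        data = json.loads(blob)
        assert data["count"] == len(data["items"])
        return SelfDescriptiveObject(data["items"])
```

Because the description rides with the data, a recipient can detect an unfamiliar type indicator and query for its definition, as step d) of the claim recites.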

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker and the consumer worker (one location) are co-located on a multi-core device at the first server .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US8954993B2
CLAIM 4
. The method of claim 1 , wherein the producer worker and the consumer worker (one location) are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker (one location) to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
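Claims 6 and 7 add conditional forwarding: the consumer's request is intercepted and passed on to the datacenter queue only when a criterion is met, for example when the queue hides a requested message upon receipt (so the remote side still performs its visibility bookkeeping). A hypothetical sketch of that decision, with invented names:

```python
def handle_message_request(request, local_cache, forward, queue_hides_on_read):
    """Intercept a consumer's message request and decide whether to
    forward it to the datacenter queue.

    `forward` stands in for sending the request to the remote queue and
    returning its response; `queue_hides_on_read` is the first criterion.
    """
    if local_cache:
        message = local_cache.pop(0)
        if queue_hides_on_read:
            # Criterion met: forward so the queue hides its own copy of
            # the message the consumer just received locally.
            forward(request)
        # Otherwise refrain from forwarding; the cache alone answers.
        return message
    # Cache miss: the remote datacenter queue serves the request.
    return forward(request)
```

The effect is that local delivery and the remote queue's hide/visibility state stay in step without the consumer ever waiting on the remote round trip.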
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue is configured to hide a requested message upon receiving the message request from the consumer worker (one location) .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker (one location) at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker and the consumer worker (one location) are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker (one location) to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker (one location) .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter (readable media) location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker (one location) that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20050071316A1
CLAIM 1
. One or more computer-readable media (first datacenter) having stored thereon a data structure , comprising : a) a first data field containing at least one data item ;
b) a second data field containing data representing a location ;
c) a third data field containing data representing a count of the at least one data item ;
d) a fourth data field containing data representing at least one first instruction to manipulate the at least one data item ;
e) a fifth data field containing data representing at least one second instruction to serialize the data structure ;
and f) a sixth data field containing data representing at least one third instruction to deserialize the data structure .

US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker and the consumer worker (one location) are co-located on a multi-core device at the first datacenter (readable media) location .
US20050071316A1
CLAIM 1
. One or more computer-readable media (first datacenter) having stored thereon a data structure , comprising : a) a first data field containing at least one data item ;
b) a second data field containing data representing a location ;
c) a third data field containing data representing a count of the at least one data item ;
d) a fourth data field containing data representing at least one first instruction to manipulate the at least one data item ;
e) a fifth data field containing data representing at least one second instruction to serialize the data structure ;
and f) a sixth data field containing data representing at least one third instruction to deserialize the data structure .

US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker (one location) to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
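The intercept/forward/refrain logic of '993 claim 22 reduces to a simple gate, sketched below under stated assumptions: `QueueController`, `criterion`, and the `.get()` interface are hypothetical names, and the local cache standing in for the non-forwarded path reflects the patent's locally-stored-message approach rather than claimed structure.

```python
class QueueController:
    """Illustrative gate for consumer message requests (names hypothetical)."""

    def __init__(self, criterion, remote_queue, local_cache):
        self.criterion = criterion        # the "first criterion" predicate
        self.remote_queue = remote_queue  # remote datacenter queue
        self.local_cache = local_cache    # locally stored messages

    def handle_request(self, request):
        # Intercept every consumer request before it leaves the server.
        if self.criterion(request):
            # First criterion met: forward to the remote datacenter queue.
            return self.remote_queue.get(request)
        # Criterion not met: refrain from forwarding; answer locally.
        return self.local_cache.get(request)
```

Claim 23 then narrows the predicate to whether the queue hides the requested message on receipt, i.e. `criterion` would test that queue property.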
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker (one location) .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US7257811B2

Filed: 2004-05-11     Issued: 2007-08-14

System, method and program to migrate a virtual machine

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

Jennifer A. Hunt, Steven Shultz
US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion (second program, first program) is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US7257811B2
CLAIM 3
. A computer program product for migrating a first virtual machine and a communication queue from a first real LPAR to a second real LPAR , before migration , said first virtual machine having an operating system and an application in a first private memory private to said first virtual machine , before migration , said communication queue residing in a first shared memory shared and accessible by said first virtual machine and a second virtual machine in said first LPAR , said program product comprising : a computer readable media ;
first program (first criterion) instructions for execution within said first LPAR to stop said first virtual machine and said second virtual machine in said first LPAR to prevent said first virtual machine and said second virtual machine from updating said communication queue in said first LPAR ;
second program (first criterion) instructions for execution within said first LPAR , operable while said first and second virtual machines are stopped , to communicate said operating system , said application and said communication queue to said second LPAR , and third program instructions for execution within said second LPAR to write said operating system and application into a second private memory in said second LPAR , and fourth program instructions for execution within said second LPAR to write said communication queue into a second shared memory in said second LPAR ;
and fifth program instructions to allocate said second private memory and other resources in said second LPAR for a migrated virtual machine corresponding to said first virtual machine , and grant said migrated virtual machine access to said communication queue in said second shared memory ;
sixth program instructions for execution within said second virtual machine to supply a work item to said communication queue before the first program instructions stop said first virtual machine and said second virtual machine ;
seventh program instructions for execution within said first LPAR , operable while said first and second virtual machines are stopped , to communicate an operating system and an application of said second virtual machine to said second LPAR , and eighth program instructions for execution within said second LPAR to write said operating system and application of said second virtual machine into a third private memory in said second LPAR ;
and ninth program instructions to allocate said third private memory and other resources in said second LPAR for another migrated virtual machine corresponding to said second virtual machine , and grant said other migrated virtual machine access to said communication queue in said second shared memory ;
and wherein said first , second , third , fourth , fifth , sixth , seventh , eighth and ninth program instructions are stored on said media in functional form .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion (second program, first program) includes whether the datacenter queue is configured to hide a requested message upon receiving the message request from the consumer worker .
US7257811B2
CLAIM 3
. A computer program product for migrating a first virtual machine and a communication queue from a first real LPAR to a second real LPAR , before migration , said first virtual machine having an operating system and an application in a first private memory private to said first virtual machine , before migration , said communication queue residing in a first shared memory shared and accessible by said first virtual machine and a second virtual machine in said first LPAR , said program product comprising : a computer readable media ;
first program (first criterion) instructions for execution within said first LPAR to stop said first virtual machine and said second virtual machine in said first LPAR to prevent said first virtual machine and said second virtual machine from updating said communication queue in said first LPAR ;
second program (first criterion) instructions for execution within said first LPAR , operable while said first and second virtual machines are stopped , to communicate said operating system , said application and said communication queue to said second LPAR , and third program instructions for execution within said second LPAR to write said operating system and application into a second private memory in said second LPAR , and fourth program instructions for execution within said second LPAR to write said communication queue into a second shared memory in said second LPAR ;
and fifth program instructions to allocate said second private memory and other resources in said second LPAR for a migrated virtual machine corresponding to said first virtual machine , and grant said migrated virtual machine access to said communication queue in said second shared memory ;
sixth program instructions for execution within said second virtual machine to supply a work item to said communication queue before the first program instructions stop said first virtual machine and said second virtual machine ;
seventh program instructions for execution within said first LPAR , operable while said first and second virtual machines are stopped , to communicate an operating system and an application of said second virtual machine to said second LPAR , and eighth program instructions for execution within said second LPAR to write said operating system and application of said second virtual machine into a third private memory in said second LPAR ;
and ninth program instructions to allocate said third private memory and other resources in said second LPAR for another migrated virtual machine corresponding to said second virtual machine , and grant said other migrated virtual machine access to said communication queue in said second shared memory ;
and wherein said first , second , third , fourth , fifth , sixth , seventh , eighth and ninth program instructions are stored on said media in functional form .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (second program, first program) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US7257811B2
CLAIM 3
. A computer program product for migrating a first virtual machine and a communication queue from a first real LPAR to a second real LPAR , before migration , said first virtual machine having an operating system and an application in a first private memory private to said first virtual machine , before migration , said communication queue residing in a first shared memory shared and accessible by said first virtual machine and a second virtual machine in said first LPAR , said program product comprising : a computer readable media ;
first program (first criterion) instructions for execution within said first LPAR to stop said first virtual machine and said second virtual machine in said first LPAR to prevent said first virtual machine and said second virtual machine from updating said communication queue in said first LPAR ;
second program (first criterion) instructions for execution within said first LPAR , operable while said first and second virtual machines are stopped , to communicate said operating system , said application and said communication queue to said second LPAR , and third program instructions for execution within said second LPAR to write said operating system and application into a second private memory in said second LPAR , and fourth program instructions for execution within said second LPAR to write said communication queue into a second shared memory in said second LPAR ;
and fifth program instructions to allocate said second private memory and other resources in said second LPAR for a migrated virtual machine corresponding to said first virtual machine , and grant said migrated virtual machine access to said communication queue in said second shared memory ;
sixth program instructions for execution within said second virtual machine to supply a work item to said communication queue before the first program instructions stop said first virtual machine and said second virtual machine ;
seventh program instructions for execution within said first LPAR , operable while said first and second virtual machines are stopped , to communicate an operating system and an application of said second virtual machine to said second LPAR , and eighth program instructions for execution within said second LPAR to write said operating system and application of said second virtual machine into a third private memory in said second LPAR ;
and ninth program instructions to allocate said third private memory and other resources in said second LPAR for another migrated virtual machine corresponding to said second virtual machine , and grant said other migrated virtual machine access to said communication queue in said second shared memory ;
and wherein said first , second , third , fourth , fifth , sixth , seventh , eighth and ninth program instructions are stored on said media in functional form .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion (second program, first program) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker .
US7257811B2
CLAIM 3
. A computer program product for migrating a first virtual machine and a communication queue from a first real LPAR to a second real LPAR , before migration , said first virtual machine having an operating system and an application in a first private memory private to said first virtual machine , before migration , said communication queue residing in a first shared memory shared and accessible by said first virtual machine and a second virtual machine in said first LPAR , said program product comprising : a computer readable media ;
first program (first criterion) instructions for execution within said first LPAR to stop said first virtual machine and said second virtual machine in said first LPAR to prevent said first virtual machine and said second virtual machine from updating said communication queue in said first LPAR ;
second program (first criterion) instructions for execution within said first LPAR , operable while said first and second virtual machines are stopped , to communicate said operating system , said application and said communication queue to said second LPAR , and third program instructions for execution within said second LPAR to write said operating system and application into a second private memory in said second LPAR , and fourth program instructions for execution within said second LPAR to write said communication queue into a second shared memory in said second LPAR ;
and fifth program instructions to allocate said second private memory and other resources in said second LPAR for a migrated virtual machine corresponding to said first virtual machine , and grant said migrated virtual machine access to said communication queue in said second shared memory ;
sixth program instructions for execution within said second virtual machine to supply a work item to said communication queue before the first program instructions stop said first virtual machine and said second virtual machine ;
seventh program instructions for execution within said first LPAR , operable while said first and second virtual machines are stopped , to communicate an operating system and an application of said second virtual machine to said second LPAR , and eighth program instructions for execution within said second LPAR to write said operating system and application of said second virtual machine into a third private memory in said second LPAR ;
and ninth program instructions to allocate said third private memory and other resources in said second LPAR for another migrated virtual machine corresponding to said second virtual machine , and grant said other migrated virtual machine access to said communication queue in said second shared memory ;
and wherein said first , second , third , fourth , fifth , sixth , seventh , eighth and ninth program instructions are stored on said media in functional form .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter (readable media) location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM (virtual machines) and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
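The controller behavior recited in '993 claim 14 (intercept the producer's message, cache it at a second location, serve the co-located consumer from the cache, and modify cached messages on a command-channel signal) can be sketched as below. All names are illustrative assumptions; the real controller sits at the hypervisor/VMM layer rather than in application code.

```python
from collections import defaultdict, deque

class DatacenterController:
    """Hypothetical sketch of the claim-14 controller flow."""

    def __init__(self):
        # Queue cache at the "second datacenter location".
        self.queue_cache = defaultdict(deque)

    def intercept_send(self, queue_name, message):
        # Intercept the producer worker's message before it reaches
        # the remote datacenter queue; store it locally instead.
        self.queue_cache[queue_name].append(message)

    def handle_request(self, queue_name):
        # Serve the co-located consumer worker's message request
        # from the local queue cache.
        return self.queue_cache[queue_name].popleft()

    def on_command_signal(self, queue_name, modify):
        # Modify stored messages in response to a signal received
        # from the command channel associated with the queue.
        self.queue_cache[queue_name] = deque(
            modify(m) for m in self.queue_cache[queue_name])
```

Note how both the store and the provide steps occur within one server, matching the "stored and provided from within a server" limitation.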
US7257811B2
CLAIM 1
. A method for migrating a first virtual machine and a communication queue from a first logical partition (“LPAR”) to a second logical partition in a same real computer , before migration , said first virtual machine having an operating system and an application in a first private memory private to said first virtual machine , before migration , said communication queue residing in a first shared memory shared and accessible by said first virtual machine and a second virtual machine in said first LPAR , said method comprising the steps of : stopping said first virtual machine and said second virtual machine in said first LPAR to prevent said first virtual machine and said second virtual machine from updating said communication queue in said first LPAR ;
while said first and second virtual machines (second VM, second VMs) are stopped , said first LPAR communicating said operating system , said application and said communication queue to said second LPAR , and said second LPAR writing said operating system and application into a second private memory in said second LPAR , and said second LPAR writing said communication queue into a second shared memory in said second LPAR ;
allocating said second private memory and other resources in said second LPAR for a migrated virtual machine corresponding to said first virtual machine , and granting said migrated virtual machine access to said communication queue in said second shared memory ;
said second virtual machine supplying a work item to said communication queue before the step of stopping said first virtual machine and said second virtual machine ;
while said first and second virtual machines are stopped , said first LPAR communicating an operating system and an application of said second virtual machine to said second LPAR , and said second LPAR writing said operating system and application of said second virtual machine into a third private memory in said second LPAR ;
and allocating said third private memory and other resources in said second LPAR for another migrated virtual machine corresponding to said second virtual machine , and granting said other migrated virtual machine access to said communication queue in said second shared memory .
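The migration sequence of US7257811B2 claim 1 (stop the VMs sharing the queue, copy OS/application and the communication queue to the second LPAR, then allocate a migrated VM with access to the copied queue) can be summarized in a hedged sketch; the dict-based structures here are purely illustrative stand-ins for LPAR memory, not the patent's mechanism.

```python
def migrate(vm, comm_queue, lpar_src, lpar_dst):
    """Sketch of the claim-1 steps; lpar_src is shown for symmetry only."""
    # Stop the VM so it cannot update the shared communication queue.
    vm["running"] = False
    # While quiesced, copy OS + application into the second LPAR's
    # private memory, and the queue into its shared memory.
    lpar_dst["private_memory"] = dict(vm["memory"])
    lpar_dst["shared_memory"] = list(comm_queue)
    # Allocate the migrated VM and grant it access to the copied queue.
    migrated = {"memory": lpar_dst["private_memory"],
                "queue": lpar_dst["shared_memory"],
                "running": True}
    return migrated
```

The second VM in the claim follows the same copy/allocate path, after which both migrated VMs share the queue in the second LPAR's shared memory.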

US7257811B2
CLAIM 3
. A computer program product for migrating a first virtual machine and a communication queue from a first real LPAR to a second real LPAR , before migration , said first virtual machine having an operating system and an application in a first private memory private to said first virtual machine , before migration , said communication queue residing in a first shared memory shared and accessible by said first virtual machine and a second virtual machine in said first LPAR , said program product comprising : a computer readable media (first datacenter) ;
first program instructions for execution within said first LPAR to stop said first virtual machine and said second virtual machine in said first LPAR to prevent said first virtual machine and said second virtual machine from updating said communication queue in said first LPAR ;
second program instructions for execution within said first LPAR , operable while said first and second virtual machines are stopped , to communicate said operating system , said application and said communication queue to said second LPAR , and third program instructions for execution within said second LPAR to write said operating system and application into a second private memory in said second LPAR , and fourth program instructions for execution within said second LPAR to write said communication queue into a second shared memory in said second LPAR ;
and fifth program instructions to allocate said second private memory and other resources in said second LPAR for a migrated virtual machine corresponding to said first virtual machine , and grant said migrated virtual machine access to said communication queue in said second shared memory ;
sixth program instructions for execution within said second virtual machine to supply a work item to said communication queue before the first program instructions stop said first virtual machine and said second virtual machine ;
seventh program instructions for execution within said first LPAR , operable while said first and second virtual machines are stopped , to communicate an operating system and an application of said second virtual machine to said second LPAR , and eighth program instructions for execution within said second LPAR to write said operating system and application of said second virtual machine into a third private memory in said second LPAR ;
and ninth program instructions to allocate said third private memory and other resources in said second LPAR for another migrated virtual machine corresponding to said second virtual machine , and grant said other migrated virtual machine access to said communication queue in said second shared memory ;
and wherein said first , second , third , fourth , fifth , sixth , seventh , eighth and ninth program instructions are stored on said media in functional form .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker and the consumer worker are co-located on a multi-core device at the first datacenter (readable media) location .
US7257811B2
CLAIM 3
. A computer program product for migrating a first virtual machine and a communication queue from a first real LPAR to a second real LPAR , before migration , said first virtual machine having an operating system and an application in a first private memory private to said first virtual machine , before migration , said communication queue residing in a first shared memory shared and accessible by said first virtual machine and a second virtual machine in said first LPAR , said program product comprising : a computer readable media (first datacenter) ;
first program instructions for execution within said first LPAR to stop said first virtual machine and said second virtual machine in said first LPAR to prevent said first virtual machine and said second virtual machine from updating said communication queue in said first LPAR ;
second program instructions for execution within said first LPAR , operable while said first and second virtual machines are stopped , to communicate said operating system , said application and said communication queue to said second LPAR , and third program instructions for execution within said second LPAR to write said operating system and application into a second private memory in said second LPAR , and fourth program instructions for execution within said second LPAR to write said communication queue into a second shared memory in said second LPAR ;
and fifth program instructions to allocate said second private memory and other resources in said second LPAR for a migrated virtual machine corresponding to said first virtual machine , and grant said migrated virtual machine access to said communication queue in said second shared memory ;
sixth program instructions for execution within said second virtual machine to supply a work item to said communication queue before the first program instructions stop said first virtual machine and said second virtual machine ;
seventh program instructions for execution within said first LPAR , operable while said first and second virtual machines are stopped , to communicate an operating system and an application of said second virtual machine to said second LPAR , and eighth program instructions for execution within said second LPAR to write said operating system and application of said second virtual machine into a third private memory in said second LPAR ;
and ninth program instructions to allocate said third private memory and other resources in said second LPAR for another migrated virtual machine corresponding to said second virtual machine , and grant said other migrated virtual machine access to said communication queue in said second shared memory ;
and wherein said first , second , third , fourth , fifth , sixth , seventh , eighth and ninth program instructions are stored on said media in functional form .

US8954993B2
CLAIM 20
. The datacenter of claim 14 , wherein the first and second VMs (virtual machines) are configured to execute on the same physical machine .
US7257811B2
CLAIM 1
. A method for migrating a first virtual machine and a communication queue from a first logical partition (“LPAR”) to a second logical partition in a same real computer , before migration , said first virtual machine having an operating system and an application in a first private memory private to said first virtual machine , before migration , said communication queue residing in a first shared memory shared and accessible by said first virtual machine and a second virtual machine in said first LPAR , said method comprising the steps of : stopping said first virtual machine and said second virtual machine in said first LPAR to prevent said first virtual machine and said second virtual machine from updating said communication queue in said first LPAR ;
while said first and second virtual machines (second VM, second VMs) are stopped , said first LPAR communicating said operating system , said application and said communication queue to said second LPAR , and said second LPAR writing said operating system and application into a second private memory in said second LPAR , and said second LPAR writing said communication queue into a second shared memory in said second LPAR ;
allocating said second private memory and other resources in said second LPAR for a migrated virtual machine corresponding to said first virtual machine , and granting said migrated virtual machine access to said communication queue in said second shared memory ;
said second virtual machine supplying a work item to said communication queue before the step of stopping said first virtual machine and said second virtual machine ;
while said first and second virtual machines are stopped , said first LPAR communicating an operating system and an application of said second virtual machine to said second LPAR , and said second LPAR writing said operating system and application of said second virtual machine into a third private memory in said second LPAR ;
and allocating said third private memory and other resources in said second LPAR for another migrated virtual machine corresponding to said second virtual machine , and granting said other migrated virtual machine access to said communication queue in said second shared memory .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (second program, first program) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US7257811B2
CLAIM 3
. A computer program product for migrating a first virtual machine and a communication queue from a first real LPAR to a second real LPAR , before migration , said first virtual machine having an operating system and an application in a first private memory private to said first virtual machine , before migration , said communication queue residing in a first shared memory shared and accessible by said first virtual machine and a second virtual machine in said first LPAR , said program product comprising : a computer readable media ;
first program (first criterion) instructions for execution within said first LPAR to stop said first virtual machine and said second virtual machine in said first LPAR to prevent said first virtual machine and said second virtual machine from updating said communication queue in said first LPAR ;
second program (first criterion) instructions for execution within said first LPAR , operable while said first and second virtual machines are stopped , to communicate said operating system , said application and said communication queue to said second LPAR , and third program instructions for execution within said second LPAR to write said operating system and application into a second private memory in said second LPAR , and fourth program instructions for execution within said second LPAR to write said communication queue into a second shared memory in said second LPAR ;
and fifth program instructions to allocate said second private memory and other resources in said second LPAR for a migrated virtual machine corresponding to said first virtual machine , and grant said migrated virtual machine access to said communication queue in said second shared memory ;
sixth program instructions for execution within said second virtual machine to supply a work item to said communication queue before the first program instructions stop said first virtual machine and said second virtual machine ;
seventh program instructions for execution within said first LPAR , operable while said first and second virtual machines are stopped , to communicate an operating system and an application of said second virtual machine to said second LPAR , and eighth program instructions for execution within said second LPAR to write said operating system and application of said second virtual machine into a third private memory in said second LPAR ;
and ninth program instructions to allocate said third private memory and other resources in said second LPAR for another migrated virtual machine corresponding to said second virtual machine , and grant said other migrated virtual machine access to said communication queue in said second shared memory ;
and wherein said first , second , third , fourth , fifth , sixth , seventh , eighth and ninth program instructions are stored on said media in functional form .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion (second program, first program) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker .
US7257811B2
CLAIM 3
. A computer program product for migrating a first virtual machine and a communication queue from a first real LPAR to a second real LPAR , before migration , said first virtual machine having an operating system and an application in a first private memory private to said first virtual machine , before migration , said communication queue residing in a first shared memory shared and accessible by said first virtual machine and a second virtual machine in said first LPAR , said program product comprising : a computer readable media ;
first program (first criterion) instructions for execution within said first LPAR to stop said first virtual machine and said second virtual machine in said first LPAR to prevent said first virtual machine and said second virtual machine from updating said communication queue in said first LPAR ;
second program (first criterion) instructions for execution within said first LPAR , operable while said first and second virtual machines are stopped , to communicate said operating system , said application and said communication queue to said second LPAR , and third program instructions for execution within said second LPAR to write said operating system and application into a second private memory in said second LPAR , and fourth program instructions for execution within said second LPAR to write said communication queue into a second shared memory in said second LPAR ;
and fifth program instructions to allocate said second private memory and other resources in said second LPAR for a migrated virtual machine corresponding to said first virtual machine , and grant said migrated virtual machine access to said communication queue in said second shared memory ;
sixth program instructions for execution within said second virtual machine to supply a work item to said communication queue before the first program instructions stop said first virtual machine and said second virtual machine ;
seventh program instructions for execution within said first LPAR , operable while said first and second virtual machines are stopped , to communicate an operating system and an application of said second virtual machine to said second LPAR , and eighth program instructions for execution within said second LPAR to write said operating system and application of said second virtual machine into a third private memory in said second LPAR ;
and ninth program instructions to allocate said third private memory and other resources in said second LPAR for another migrated virtual machine corresponding to said second virtual machine , and grant said other migrated virtual machine access to said communication queue in said second shared memory ;
and wherein said first , second , third , fourth , fifth , sixth , seventh , eighth and ninth program instructions are stored on said media in functional form .
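For orientation only, the migration sequence recited in US7257811B2 claim 3 can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation; the data structures and function name (`migrate`) are assumptions introduced here.

```python
# Sketch of the claim 3 sequence: stop both VMs in the first LPAR, copy each
# VM's private state and the shared communication queue to the second LPAR,
# then grant both migrated VMs access to the queue in the second LPAR's
# shared memory.

def migrate(first_lpar):
    for vm in first_lpar["vms"]:
        vm["running"] = False  # stop: prevents further updates to the queue
    # communicate the queue to the second LPAR's shared memory
    second_lpar = {"vms": [], "shared_memory": {
        "queue": list(first_lpar["shared_memory"]["queue"])}}
    for vm in first_lpar["vms"]:
        # write OS/app into a private memory and grant queue access
        migrated = {"name": vm["name"],
                    "private_memory": dict(vm["private_memory"]),
                    "running": True,
                    "queue": second_lpar["shared_memory"]["queue"]}
        second_lpar["vms"].append(migrated)
    return second_lpar

lpar1 = {"vms": [
            {"name": "vm1", "running": True,
             "private_memory": {"os": "osA", "app": "appA"}},
            {"name": "vm2", "running": True,
             "private_memory": {"os": "osB", "app": "appB"}}],
         "shared_memory": {"queue": ["work-item"]}}  # vm2 supplied a work item
lpar2 = migrate(lpar1)
assert lpar2["shared_memory"]["queue"] == ["work-item"]
# both migrated VMs share the one queue copy in the second shared memory
assert all(vm["queue"] is lpar2["shared_memory"]["queue"] for vm in lpar2["vms"])
```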




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20050044151A1

Filed: 2004-03-04     Issued: 2005-02-24

Asynchronous mechanism and message pool

(Original Assignee) Messagesoft Inc     (Current Assignee) Messagesoft Inc

Jianguo Jiang, Yaping Liu, Jingwei Liang, Wei Huang, Shijun Wu
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (first message) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel (acknowledging receipt) associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
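For orientation only, the method of claim 1 can be sketched as a minimal local queue-cache proxy. This is a sketch under assumed names (`LocalQueueProxy` and its methods are hypothetical, introduced here for illustration), not the patent's implementation.

```python
# Minimal sketch of the claim 1 flow: a proxy at the first server caches a
# message bound for a remote datacenter queue, serves it to the co-located
# consumer, and modifies (here, deletes) the cached copy when a signal
# arrives on the command channel associated with the remote queue.

class LocalQueueProxy:
    def __init__(self):
        self.queue_cache = {}  # queue name -> locally stored messages

    def on_producer_send(self, queue_name, message):
        """Detected producer send: store the message in the local queue cache."""
        self.queue_cache.setdefault(queue_name, []).append(message)

    def on_consumer_request(self, queue_name):
        """Detected consumer request: provide the locally stored message."""
        cached = self.queue_cache.get(queue_name, [])
        return cached[0] if cached else None

    def on_command_channel_signal(self, queue_name, message):
        """Command-channel signal: modify (delete) the stored message."""
        cached = self.queue_cache.get(queue_name, [])
        if message in cached:
            cached.remove(message)


proxy = LocalQueueProxy()
proxy.on_producer_send("jobs", "msg-1")               # producer at first server
assert proxy.on_consumer_request("jobs") == "msg-1"   # co-located consumer
proxy.on_command_channel_signal("jobs", "msg-1")      # e.g. remote delete signal
assert proxy.on_consumer_request("jobs") is None
```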
US20050044151A1
CLAIM 2
. The method of claim 1 , wherein : receiving includes acknowledging receipt (command channel) of each of the plurality of messages when received .

US20050044151A1
CLAIM 6
. The method of claim 3 , wherein : receiving a trigger includes determining that a protocol time lag has expired for a first message (first message) received in the plurality of messages .

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (first message) sent by the producer worker before storing the first message .
US20050044151A1
CLAIM 6
. The method of claim 3 , wherein : receiving a trigger includes determining that a protocol time lag has expired for a first message (first message) received in the plurality of messages .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (first message) includes deleting the first message .
US20050044151A1
CLAIM 6
. The method of claim 3 , wherein : receiving a trigger includes determining that a protocol time lag has expired for a first message (first message) received in the plurality of messages .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (first message) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel (acknowledging receipt) associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
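For orientation only, the module decomposition of claim 8 can be sketched as a detector that classifies observed queue traffic and a processing module that intercepts, stores, serves, and modifies messages. Class and method names here are illustrative assumptions, not the patent's implementation.

```python
# Sketch of the claim 8 decomposition: a queue usage detector module and a
# processing module hosted in a VMM at the first server.

class QueueUsageDetector:
    def classify(self, server, worker, op):
        """Tag a worker as producer or consumer from the observed operation."""
        return {"server": server, "worker": worker,
                "role": "producer" if op == "send" else "consumer"}

class ProcessingModule:
    def __init__(self):
        self.store = {}  # queue name -> messages stored at the first server

    def intercept(self, queue, message):
        """Intercept a producer send and store the message locally."""
        self.store.setdefault(queue, []).append(message)

    def serve(self, queue):
        """Provide the stored message in response to a consumer request."""
        msgs = self.store.get(queue, [])
        return msgs[0] if msgs else None

    def on_signal(self, queue):
        """Command-channel signal: modify (drop) the stored message."""
        if self.store.get(queue):
            self.store[queue].pop(0)

detector = QueueUsageDetector()
assert detector.classify("server-1", "w1", "send")["role"] == "producer"
pm = ProcessingModule()
pm.intercept("q", "m1")
assert pm.serve("q") == "m1"
pm.on_signal("q")
assert pm.serve("q") is None
```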
US20050044151A1
CLAIM 2
. The method of claim 1 , wherein : receiving includes acknowledging receipt (command channel) of each of the plurality of messages when received .

US20050044151A1
CLAIM 6
. The method of claim 3 , wherein : receiving a trigger includes determining that a protocol time lag has expired for a first message (first message) received in the plurality of messages .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message (first message) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel (acknowledging receipt) associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20050044151A1
CLAIM 2
. The method of claim 1 , wherein : receiving includes acknowledging receipt (command channel) of each of the plurality of messages when received .

US20050044151A1
CLAIM 6
. The method of claim 3 , wherein : receiving a trigger includes determining that a protocol time lag has expired for a first message (first message) received in the plurality of messages .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (first message) sent by the producer worker before storing the first message .
US20050044151A1
CLAIM 6
. The method of claim 3 , wherein : receiving a trigger includes determining that a protocol time lag has expired for a first message (first message) received in the plurality of messages .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (first message) by deleting the first message .
US20050044151A1
CLAIM 6
. The method of claim 3 , wherein : receiving a trigger includes determining that a protocol time lag has expired for a first message (first message) received in the plurality of messages .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN1508682A

Filed: 2003-12-16     Issued: 2004-06-30

Method, system and device for task scheduling

(Original Assignee) International Business Machines Corporation     

A. Kundu
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue (一个队列) at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request (这些请求) to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
CN1508682A
CLAIM 1
. A method of scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue [一个队列] (first datacenter, datacenter queue) , and each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements the network is to provide ; and b . determining the resources allocated to these requests [这些请求] (message request) within the network .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request (这些请求) from the consumer worker to the datacenter queue (一个队列) ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
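For orientation only, the gating step of claim 6 can be sketched as a predicate over the intercepted request. Per claim 7, one such first criterion is whether the remote queue hides a requested message upon receiving the request (visibility-timeout style behavior); the predicate and property name below are hypothetical stand-ins introduced here.

```python
# Sketch of the claim 6 gating step: an intercepted consumer message request
# is forwarded to the remote datacenter queue only when a first criterion is
# met; otherwise the proxy refrains from forwarding and serves locally.

def handle_message_request(queue_props, forward, serve_locally):
    """Forward the intercepted request only if the first criterion is met."""
    if queue_props.get("hides_message_on_request", False):  # first criterion
        return forward()
    return serve_locally()  # refrain from forwarding

sent = []
result = handle_message_request(
    {"hides_message_on_request": True},
    forward=lambda: sent.append("forwarded") or "remote",
    serve_locally=lambda: "local",
)
assert result == "remote" and sent == ["forwarded"]
assert handle_message_request({}, lambda: "remote", lambda: "local") == "local"
```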
CN1508682A
CLAIM 1
. A method of scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue [一个队列] (first datacenter, datacenter queue) , and each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements the network is to provide ; and b . determining the resources allocated to these requests [这些请求] (message request) within the network .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue (一个队列) is configured to hide a requested message upon receiving the message request (这些请求) from the consumer worker .
CN1508682A
CLAIM 1
. A method of scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue [一个队列] (first datacenter, datacenter queue) , and each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements the network is to provide ; and b . determining the resources allocated to these requests [这些请求] (message request) within the network .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue (一个队列) at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request (这些请求) to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN1508682A
CLAIM 1
. A method of scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue [一个队列] (first datacenter, datacenter queue) , and each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements the network is to provide ; and b . determining the resources allocated to these requests [这些请求] (message request) within the network .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue (一个队列) request .
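For orientation only, the bookkeeping of claim 9 can be sketched as a table of queue usage built from observed datacenter-queue requests, e.g. to identify queues used by co-located producers and consumers (the "matched queues" of the abstract). The table structure and function name are illustrative assumptions.

```python
# Sketch of the claim 9 step: build a table of queue usage from observed
# queue requests, keyed by queue, recording which local workers produce to
# and consume from each queue.

from collections import defaultdict

def build_usage_table(observed_requests):
    """observed_requests: iterable of (worker, queue, op) tuples."""
    table = defaultdict(lambda: {"producers": set(), "consumers": set()})
    for worker, queue, op in observed_requests:
        role = "producers" if op == "send" else "consumers"
        table[queue][role].add(worker)
    return table

obs = [("w1", "jobs", "send"), ("w2", "jobs", "receive"), ("w3", "logs", "send")]
table = build_usage_table(obs)
# A queue with both local producers and consumers is a matching candidate.
matched = [q for q, u in table.items() if u["producers"] and u["consumers"]]
assert matched == ["jobs"]
```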
CN1508682A
CLAIM 1
. A method of scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue [一个队列] (first datacenter, datacenter queue) , and each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue (一个队列) request .
CN1508682A
CLAIM 1
. A method of scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue [一个队列] (first datacenter, datacenter queue) , and each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request (这些请求) from the consumer worker to the datacenter queue (一个队列) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
CN1508682A
CLAIM 1
. A method of scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue [一个队列] (first datacenter, datacenter queue) , and each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements the network is to provide ; and b . determining the resources allocated to these requests [这些请求] (message request) within the network .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue (一个队列) is configured to hide the requested message upon receiving the message request (这些请求) from the consumer worker .
CN1508682A
CLAIM 1
. A method of scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue [一个队列] (first datacenter, datacenter queue) , and each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements the network is to provide ; and b . determining the resources allocated to these requests [这些请求] (message request) within the network .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue (一个队列) at least partially stored at a first datacenter (一个队列) location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter (系统内) location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request (这些请求) to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN1508682A
CLAIM 1
. A method of scheduling requests to be processed in a multi-level computing system [系统内] (second datacenter, second datacenter location) , each level of the computing system having at least one queue [一个队列] (first datacenter, datacenter queue) , and each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements the network is to provide ; and b . determining the resources allocated to these requests [这些请求] (message request) within the network .

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue (一个队列) .
CN1508682A
CLAIM 1
. A method of scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue [一个队列] (first datacenter, datacenter queue) , and each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue (一个队列) request .
CN1508682A
CLAIM 1
. A method of scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue [一个队列] (first datacenter, datacenter queue) , and each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue (一个队列) request .
CN1508682A
CLAIM 1
. A method of scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue [一个队列] (first datacenter, datacenter queue) , and each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker and the consumer worker are co-located on a multi-core device at the first datacenter (一个队列) location .
CN1508682A
CLAIM 1
. A method of scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue [一个队列] (first datacenter, datacenter queue) , and each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request (这些请求) from the consumer worker to the datacenter queue (一个队列) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
CN1508682A
CLAIM 1
. A method of scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue [一个队列] (first datacenter, datacenter queue) , and each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements the network is to provide ; and b . determining the resources allocated to these requests [这些请求] (message request) within the network .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue (一个队列) is configured to hide the requested message upon receiving the message request (这些请求) from the consumer worker .
CN1508682A
CLAIM 1
. A method of scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue [一个队列] (first datacenter, datacenter queue) , and each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements the network is to provide ; and b . determining the resources allocated to these requests [这些请求] (message request) within the network .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
JP2004199678A

Filed: 2003-12-12     Issued: 2004-07-15

Method, system, and program product for task scheduling

(Original Assignee) International Business Machines Corporation (IBM)     

Ashish Kundu
US8954993B2
CLAIM 1
. A method to locally process queue requests (の要求) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
JP2004199678A
CLAIM 8
The method of claim 6 , wherein said step of assuming a corrective action includes the step of assuming an increase in the request [の要求] (queue requests) processing capacity of an appropriate queue when the appropriate queue cannot accept a request .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests (の要求) from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
JP2004199678A
CLAIM 8
The method of claim 6 , wherein said step of assuming a corrective action includes the step of assuming an increase in the request [の要求] (queue requests) processing capacity of an appropriate queue when the appropriate queue cannot accept a request .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue request (QoS) .
JP2004199678A
CLAIM 22
The apparatus of claim 19 , wherein said requests are classified at said first-level queue based on QoS (datacenter queue request) /SLA and information from said subsequent-level queues .

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue request (QoS) .
JP2004199678A
CLAIM 22
The apparatus of claim 19 , wherein said requests are classified at said first-level queue based on QoS (datacenter queue request) /SLA and information from said subsequent-level queues .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests (の要求) from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
JP2004199678A
CLAIM 8
The method of claim 6 , wherein said step of assuming a corrective action includes the step of assuming an increase in the request [の要求] (queue requests) processing capacity of an appropriate queue when the appropriate queue cannot accept a request .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue request (QoS) .
JP2004199678A
CLAIM 22
The apparatus of claim 19 , wherein said requests are classified at said first-level queue based on QoS (datacenter queue request) /SLA and information from said subsequent-level queues .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue request (QoS) .
JP2004199678A
CLAIM 22
The apparatus of claim 19 , wherein said requests are classified at said first-level queue based on QoS (datacenter queue request) /SLA and information from said subsequent-level queues .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US7337214B2

Filed: 2003-09-26     Issued: 2008-02-26

Caching, clustering and aggregating server

(Original Assignee) YHC Corp     (Current Assignee) YHC Corp

Michael Douglass, Douglas Swarin, Edward Henigin, Jonah Yokubaitis
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server (second server) ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker (storage units) at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US7337214B2
CLAIM 2
. The server system of claim 1 , wherein a second server (second server) of the cluster of servers retrieves the first requested article from the at least one of the servers in the cluster of servers when the customer requested article has already been requested from the backend servers due to a previous customer request for the first requested article .

US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker and the consumer worker (storage units) are co-located on a multi-core device at the first server .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

US8954993B2
CLAIM 4
. The method of claim 1 , wherein the producer worker and the consumer worker (storage units) are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker (storage units) to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion (communication network) is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

US7337214B2
CLAIM 14
. A storage and retrieval system comprising : a plurality of servers forming a server cluster , each server of the plurality of servers having storage space for storing articles and data ;
a communication network (first criterion) allowing each one of the plurality of servers to communicate with each other ;
a backend server comprising storage space for storing articles , the backend server being in communication with the server cluster via a first communication link ;
wherein a first server of the plurality of servers accepts a request for a first article from a customer ;
wherein the first server , via the communication network , queries the plurality of servers for the first article ;
wherein if the first article is found in one of the plurality of servers' storage space , the first article is provided to the first server for delivery to the customer ;
and wherein , if the first article is not found in one of the plurality of servers , the first server requests the first article from the backend server .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion (communication network) includes whether the datacenter queue is configured to hide a requested message upon receiving the message request from the consumer worker (storage units) .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

US7337214B2
CLAIM 14
. A storage and retrieval system comprising : a plurality of servers forming a server cluster , each server of the plurality of servers having storage space for storing articles and data ;
a communication network (first criterion) allowing each one of the plurality of servers to communicate with each other ;
a backend server comprising storage space for storing articles , the backend server being in communication with the server cluster via a first communication link ;
wherein a first server of the plurality of servers accepts a request for a first article from a customer ;
wherein the first server , via the communication network , queries the plurality of servers for the first article ;
wherein if the first article is found in one of the plurality of servers' storage space , the first article is provided to the first server for delivery to the customer ;
and wherein , if the first article is not found in one of the plurality of servers , the first server requests the first article from the backend server .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server (second server) ;

and detect a consumer worker (storage units) at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US7337214B2
CLAIM 2
. The server system of claim 1 , wherein a second server (second server) of the cluster of servers retrieves the first requested article from the at least one of the servers in the cluster of servers when the customer requested article has already been requested from the backend servers due to a previous customer request for the first requested article .

US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

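The VMM recited in claim 8 intercepts a producer's message, stores it at the first server, serves a co-located consumer from that local store, and modifies the stored message on a command-channel signal. A minimal sketch of that behavior, with all class and method names assumed for illustration (not from the patent):

```python
# Illustrative sketch: a VMM-style local queue proxy that intercepts a
# producer's message, stores it locally, serves a co-located consumer,
# and modifies the stored message when a command-channel signal arrives.

class LocalQueueProxy:
    def __init__(self):
        self.local_store = {}  # queue name -> locally cached messages

    def intercept_send(self, queue_name, message):
        # Intercept the producer's message instead of forwarding it to
        # the remote datacenter queue; store it at the first server.
        self.local_store.setdefault(queue_name, []).append(message)

    def handle_request(self, queue_name):
        # Serve a co-located consumer's message request from the local store.
        msgs = self.local_store.get(queue_name, [])
        return msgs.pop(0) if msgs else None

    def on_command_signal(self, queue_name, signal):
        # Modify the stored messages in response to a command-channel
        # signal, e.g. delete them (compare claims 5 and 21).
        if signal == "delete":
            self.local_store.pop(queue_name, None)
```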
US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue request (storage spaces) .
US7337214B2
CLAIM 23
. The system of claim 21 , wherein the backend server provides the first article to the first server for delivery to the customer and wherein the first server attempts to store the first article in a first storage space such that , if there are only time interval storage spaces (datacenter queue request) having time intervals newer than a date of the first article , then the first article is not stored in the first server .

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue request (storage spaces) .
US7337214B2
CLAIM 23
. The system of claim 21 , wherein the backend server provides the first article to the first server for delivery to the customer and wherein the first server attempts to store the first article in a first storage space such that , if there are only time interval storage spaces (datacenter queue request) having time intervals newer than a date of the first article , then the first article is not stored in the first server .

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker and the consumer worker (storage units) are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each storage unit .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker (storage units) to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (communication network) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each storage unit .

US7337214B2
CLAIM 14
. A storage and retrieval system comprising : a plurality of servers forming a server cluster , each server of the plurality of servers having storage space for storing articles and data ;
a communication network (first criterion) allowing each one of the plurality of servers to communicate with each other ;
a backend server comprising storage space for storing articles , the backend server being in communication with the server cluster via a first communication link ;
wherein a first server of the plurality of servers accepts a request for a first article from a customer ;
wherein the first server , via the communication network , queries the plurality of servers for the first article ;
wherein if the first article is found in one of the plurality of servers storage space , the first article is provided to the first server for delivery to the customer ;
and wherein , if the first article is not found in one of the plurality of servers , the first server requests the first article from the backend server .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion (communication network) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker (storage units) .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each storage unit .

US7337214B2
CLAIM 14
. A storage and retrieval system comprising : a plurality of servers forming a server cluster , each server of the plurality of servers having storage space for storing articles and data ;
a communication network (first criterion) allowing each one of the plurality of servers to communicate with each other ;
a backend server comprising storage space for storing articles , the backend server being in communication with the server cluster via a first communication link ;
wherein a first server of the plurality of servers accepts a request for a first article from a customer ;
wherein the first server , via the communication network , queries the plurality of servers for the first article ;
wherein if the first article is found in one of the plurality of servers storage space , the first article is provided to the first server for delivery to the customer ;
and wherein , if the first article is not found in one of the plurality of servers , the first server requests the first article from the backend server .

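Claims 12 and 13 condition forwarding of an intercepted message request on a first criterion, which may include whether the datacenter queue hides a requested message upon receiving the request (as with a visibility timeout). One hedged reading of that decision logic, with illustrative names:

```python
# Illustrative sketch of the claims 12-13 forwarding decision. Treating
# "hides the requested message" as visibility-timeout semantics is one
# reading of the claim language, not the patent's implementation.

def should_forward(queue_hides_on_request: bool, served_locally: bool) -> bool:
    # If the remote queue hides a message once a request is received,
    # the request is still forwarded so the remote copy is hidden and
    # not delivered twice, even when the message was served locally.
    if queue_hides_on_request:
        return True
    # Otherwise the local copy suffices and forwarding is refrained from.
    return not served_locally
```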
US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker (storage units) that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each storage unit .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue request (storage spaces) .
US7337214B2
CLAIM 23
. The system of claim 21 , wherein the backend server provides the first article to the first server for delivery to the customer and wherein the first server attempts to store the first article in a first storage space such that , if there are only time interval storage spaces (datacenter queue request) having time intervals newer than a date of the first article , then the first article is not stored in the first server .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue request (storage spaces) .
US7337214B2
CLAIM 23
. The system of claim 21 , wherein the backend server provides the first article to the first server for delivery to the customer and wherein the first server attempts to store the first article in a first storage space such that , if there are only time interval storage spaces (datacenter queue request) having time intervals newer than a date of the first article , then the first article is not stored in the first server .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker and the consumer worker (storage units) are co-located on a multi-core device at the first datacenter location .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each storage unit .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker (storage units) to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (communication network) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each storage unit .

US7337214B2
CLAIM 14
. A storage and retrieval system comprising : a plurality of servers forming a server cluster , each server of the plurality of servers having storage space for storing articles and data ;
a communication network (first criterion) allowing each one of the plurality of servers to communicate with each other ;
a backend server comprising storage space for storing articles , the backend server being in communication with the server cluster via a first communication link ;
wherein a first server of the plurality of servers accepts a request for a first article from a customer ;
wherein the first server , via the communication network , queries the plurality of servers for the first article ;
wherein if the first article is found in one of the plurality of servers storage space , the first article is provided to the first server for delivery to the customer ;
and wherein , if the first article is not found in one of the plurality of servers , the first server requests the first article from the backend server .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion (communication network) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker (storage units) .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each storage unit .

US7337214B2
CLAIM 14
. A storage and retrieval system comprising : a plurality of servers forming a server cluster , each server of the plurality of servers having storage space for storing articles and data ;
a communication network (first criterion) allowing each one of the plurality of servers to communicate with each other ;
a backend server comprising storage space for storing articles , the backend server being in communication with the server cluster via a first communication link ;
wherein a first server of the plurality of servers accepts a request for a first article from a customer ;
wherein the first server , via the communication network , queries the plurality of servers for the first article ;
wherein if the first article is found in one of the plurality of servers storage space , the first article is provided to the first server for delivery to the customer ;
and wherein , if the first article is not found in one of the plurality of servers , the first server requests the first article from the backend server .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
EP1474746A1

Filed: 2003-02-14     Issued: 2004-11-10

Management of message queues

(Original Assignee) Proquent Systems Corp     (Current Assignee) Proquent Systems Corp

Thomas E. Hamilton, Kevin Kicklighter, Charles R. Davis
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (first message) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
EP1474746A1
CLAIM 25
. A method for passing messages between processes in a distributed system comprising : providing an application programming interface to processes hosted on computers of the distributed system ;
passing a first message (first message) from a first process to a second process hosted on one computer of the distributed system , including passing said message through a shared memory accessible to both the first process and the second process ;
and passing a second message from the first process to a third process hosted on a second computer of the distributed system , including passing said message over a communication channel coupling the first and the second computers .

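EP1474746's claim 25 selects the transport by co-location: shared memory when both processes are hosted on one computer, a communication channel otherwise. A minimal sketch under that reading, with in-process queues standing in for the shared memory segment and the inter-computer channel (all names are assumptions):

```python
# Illustrative sketch of EP1474746 claim 25: a message passes through
# shared memory when sender and receiver share a host, and over a
# communication channel when they are on different computers.

import queue

class MessageRouter:
    def __init__(self, host_of):
        self.host_of = host_of                # process name -> host name
        self.shared_memory = queue.Queue()    # stands in for a shared segment
        self.network_channel = queue.Queue()  # stands in for a socket/link

    def send(self, sender, receiver, message):
        if self.host_of[sender] == self.host_of[receiver]:
            self.shared_memory.put((receiver, message))  # local path
            return "shared-memory"
        self.network_channel.put((receiver, message))    # remote path
        return "channel"
```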
US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (first message) sent by the producer worker before storing the first message .
EP1474746A1
CLAIM 25
. A method for passing messages between processes in a distributed system comprising : providing an application programming interface to processes hosted on computers of the distributed system ;
passing a first message (first message) from a first process to a second process hosted on one computer of the distributed system , including passing said message through a shared memory accessible to both the first process and the second process ;
and passing a second message from the first process to a third process hosted on a second computer of the distributed system , including passing said message over a communication channel coupling the first and the second computers .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (first message) includes deleting the first message .
EP1474746A1
CLAIM 25
. A method for passing messages between processes in a distributed system comprising : providing an application programming interface to processes hosted on computers of the distributed system ;
passing a first message (first message) from a first process to a second process hosted on one computer of the distributed system , including passing said message through a shared memory accessible to both the first process and the second process ;
and passing a second message from the first process to a third process hosted on a second computer of the distributed system , including passing said message over a communication channel coupling the first and the second computers .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (first message) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
EP1474746A1
CLAIM 25
. A method for passing messages between processes in a distributed system comprising : providing an application programming interface to processes hosted on computers of the distributed system ;
passing a first message (first message) from a first process to a second process hosted on one computer of the distributed system , including passing said message through a shared memory accessible to both the first process and the second process ;
and passing a second message from the first process to a third process hosted on a second computer of the distributed system , including passing said message over a communication channel coupling the first and the second computers .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message (first message) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter (one computer) location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
EP1474746A1
CLAIM 25
. A method for passing messages between processes in a distributed system comprising : providing an application programming interface to processes hosted on computers of the distributed system ;
passing a first message (first message) from a first process to a second process hosted on one computer (second datacenter) of the distributed system , including passing said message through a shared memory accessible to both the first process and the second process ;
and passing a second message from the first process to a third process hosted on a second computer of the distributed system , including passing said message over a communication channel coupling the first and the second computers .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (first message) sent by the producer worker before storing the first message .
EP1474746A1
CLAIM 25
. A method for passing messages between processes in a distributed system comprising : providing an application programming interface to processes hosted on computers of the distributed system ;
passing a first message (first message) from a first process to a second process hosted on one computer of the distributed system , including passing said message through a shared memory accessible to both the first process and the second process ;
and passing a second message from the first process to a third process hosted on a second computer of the distributed system , including passing said message over a communication channel coupling the first and the second computers .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (first message) by deleting the first message .
EP1474746A1
CLAIM 25
. A method for passing messages between processes in a distributed system comprising : providing an application programming interface to processes hosted on computers of the distributed system ;
passing a first message (first message) from a first process to a second process hosted on one computer of the distributed system , including passing said message through a shared memory accessible to both the first process and the second process ;
and passing a second message from the first process to a third process hosted on a second computer of the distributed system , including passing said message over a communication channel coupling the first and the second computers .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20040205770A1

Filed: 2003-02-11     Issued: 2004-10-14

Duplicate message elimination system for a message broker

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

Kai Zhang, Kenneth Astl, Subramanyam Gooty, Arul Sundaramurthy
US8954993B2
CLAIM 1
. A method to locally process queue requests (another time) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one (unique message identifier) of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20040205770A1
CLAIM 1
. A method for preventing delivery of duplicate messages in a message system , wherein each message comprises a unique message identifier (queue cache includes one) ;
the method comprising the steps of : polling a message store for messages ;
retrieving from the message store at least one message ;
processing the at least one message ;
retrieving a message identifier from a monitor queue in a transactional server , the message identifier corresponding to the last successfully delivered message ;
and comparing the message identifier retrieved from the monitor queue to the message identifier of the message retrieved from the message store .

US20040205770A1
CLAIM 9
. The message system of claim 7 , wherein the message transfer agent further comprises logic for acknowledging to the message store the transmission of the retrieved message , so that the message store does not serve the message another time (queue requests) , if the compared identifiers match .

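US20040205770's claims 1 and 9 prevent duplicate delivery by comparing each retrieved message's identifier against the identifier of the last successfully delivered message held in a monitor queue. A minimal sketch of that comparison, with hypothetical names:

```python
# Illustrative sketch of US20040205770's duplicate elimination: the
# identifier of the last successfully delivered message is kept and
# compared against each message retrieved from the store, so the store
# does not serve the same message another time.

def deliver_once(messages, monitor):
    # `messages` yields (message_id, payload) in order from the store;
    # `monitor` holds the id of the last successfully delivered message.
    delivered = []
    for msg_id, payload in messages:
        if msg_id == monitor.get("last_delivered"):
            continue  # compared identifiers match: skip the duplicate
        delivered.append(payload)
        monitor["last_delivered"] = msg_id  # acknowledge the delivery
    return delivered
```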
US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests (another time) from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20040205770A1
CLAIM 9
. The message system of claim 7 , wherein the message transfer agent further comprises logic for acknowledging to the message store the transmission of the retrieved message , so that the message store does not serve the message another time (queue requests) , if the compared identifiers match .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests (another time) from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20040205770A1
CLAIM 9
. The message system of claim 7 , wherein the message transfer agent further comprises logic for acknowledging to the message store the transmission of the retrieved message , so that the message store does not serve the message another time (queue requests) , if the compared identifiers match .

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache includes one (unique message identifier) of a copy and a partial copy of the datacenter queue .
US20040205770A1
CLAIM 1
. A method for preventing delivery of duplicate messages in a message system , wherein each message comprises a unique message identifier (queue cache includes one) ;
the method comprising the steps of : polling a message store for messages ;
retrieving from the message store at least one message ;
processing the at least one message ;
retrieving a message identifier from a monitor queue in a transactional server , the message identifier corresponding to the last successfully delivered message ;
and comparing the message identifier retrieved from the monitor queue to the message identifier of the message retrieved from the message store .

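Claims 1 and 16 recite a queue cache that includes one of a copy and a partial copy of the datacenter queue. A minimal sketch of such a cache, assuming a list-backed queue and illustrative names:

```python
# Illustrative sketch of the claims 1 and 16 "queue cache": a local
# store holding either a full copy or a partial copy of the remote
# datacenter queue, from which consumer requests are served.

class QueueCache:
    def __init__(self, datacenter_queue, partial_size=None):
        # Keep a full copy, or only the first `partial_size` messages.
        if partial_size is None:
            self.messages = list(datacenter_queue)                 # full copy
        else:
            self.messages = list(datacenter_queue[:partial_size])  # partial copy

    def pop_for_consumer(self):
        # Provide the locally stored message in response to a request.
        return self.messages.pop(0) if self.messages else None
```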



US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20040117794A1

Filed: 2002-12-17     Issued: 2004-06-17

Method, system and framework for task scheduling

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

Ashish Kundu
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server (load balancing) sending a first message (exchanging information) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20040117794A1
CLAIM 19
. An apparatus suitable for load balancing (first server) in a computing system , the apparatus comprising : a . a plurality of queues comprising : i . a first level queue for buffering incoming requests ;
ii . a plurality of subsequent level queues , each subsequent level of queue corresponding to a class of incoming request ;
b . means for classifying the requests into a plurality of subsequent level queues based on user defined parameters ;
c . means for exchanging information (first message) amongst the plurality of levels of queues ;
and d . means for dispatching the requests from the queues to at least one of the queues or target components .

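Claim 19 of US20040117794 buffers incoming requests in a first level queue, classifies them into subsequent level queues based on user defined parameters, and dispatches from those queues. A minimal sketch under those assumptions (the classification rule and all names are illustrative):

```python
# Illustrative sketch of US20040117794 claim 19: a first level queue
# buffers incoming requests, which are classified into subsequent
# level queues and dispatched from there.

from collections import deque

class MultiLevelScheduler:
    def __init__(self, classify):
        self.first_level = deque()  # buffers incoming requests
        self.subsequent = {}        # class -> subsequent level queue
        self.classify = classify    # user defined classification parameters

    def accept(self, request):
        self.first_level.append(request)

    def classify_all(self):
        # Move buffered requests into their class-specific queues.
        while self.first_level:
            req = self.first_level.popleft()
            cls = self.classify(req)
            self.subsequent.setdefault(cls, deque()).append(req)

    def dispatch(self, cls):
        # Dispatch the next request of the given class, if any.
        q = self.subsequent.get(cls)
        return q.popleft() if q else None
```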
US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (exchanging information) sent by the producer worker before storing the first message .
US20040117794A1
CLAIM 19
. An apparatus suitable for load balancing in a computing system , the apparatus comprising : a . a plurality of queues comprising : i . a first level queue for buffering incoming requests ;
ii . a plurality of subsequent level queues , each subsequent level of queue corresponding to a class of incoming request ;
b . means for classifying the requests into a plurality of subsequent level queues based on user defined parameters ;
c . means for exchanging information (first message) amongst the plurality of levels of queues ;
and d . means for dispatching the requests from the queues to at least one of the queues or target components .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server (load balancing) .
US20040117794A1
CLAIM 19
. An apparatus suitable for load balancing (first server) in a computing system , the apparatus comprising : a . a plurality of queues comprising : i . a first level queue for buffering incoming requests ;
ii . a plurality of subsequent level queues , each subsequent level of queue corresponding to a class of incoming request ;
b . means for classifying the requests into a plurality of subsequent level queues based on user defined parameters ;
c . means for exchanging information amongst the plurality of levels of queues ;
and d . means for dispatching the requests from the queues to at least one of the queues or target components .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (exchanging information) includes deleting the first message .
US20040117794A1
CLAIM 19
. An apparatus suitable for load balancing in a computing system , the apparatus comprising : a . a plurality of queues comprising : i . a first level queue for buffering incoming requests ;
ii . a plurality of subsequent level queues , each subsequent level of queue corresponding to a class of incoming request ;
b . means for classifying the requests into a plurality of subsequent level queues based on user defined parameters ;
c . means for exchanging information (first message) amongst the plurality of levels of queues ;
and d . means for dispatching the requests from the queues to at least one of the queues or target components .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion (second program, first program) is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20040117794A1
CLAIM 31
. A computer program product for scheduling requests in a network , the computer program product comprising : a . first program (first criterion) instruction means for receiving a request at a first level load balancer ;
b . second program (first criterion) instruction means for exchanging information for the request with adjacent level load balancers ;
c . third program instruction means for obtaining a classification value of the request based on the exchanged information ;
and d . fourth program instruction means for scheduling the request based on the classification value .
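The intercept-and-conditionally-forward behavior of '993 claims 6-7, charted above against '794 claim 31, reduces to a simple gate. A minimal sketch, assuming hypothetical `forward` and `serve_locally` callables; the criterion shown is the one '993 claim 7 names (whether the remote queue hides a requested message on receipt).

```python
def handle_message_request(request, queue_hides_on_read, forward, serve_locally):
    """Sketch of '993 claim 6: intercept a consumer worker's message
    request, forward it to the datacenter queue if a first criterion is
    met, and refrain from forwarding otherwise. All parameters are
    hypothetical stand-ins."""
    if queue_hides_on_read:
        # criterion met: forward the request to the remote datacenter queue
        return forward(request)
    # criterion not met: refrain from forwarding, serve from local store
    return serve_locally(request)
```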

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion (second program, first program) includes whether the datacenter queue is configured to hide a requested message upon receiving the message request from the consumer worker .
US20040117794A1
CLAIM 31
. A computer program product for scheduling requests in a network , the computer program product comprising : a . first program (first criterion) instruction means for receiving a request at a first level load balancer ;
b . second program (first criterion) instruction means for exchanging information for the request with adjacent level load balancers ;
c . third program instruction means for obtaining a classification value of the request based on the exchanged information ;
and d . fourth program instruction means for scheduling the request based on the classification value .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server (load balancing) , wherein the producer worker sends a first message (exchanging information) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20040117794A1
CLAIM 19
. An apparatus suitable for load balancing (first server) in a computing system , the apparatus comprising : a . a plurality of queues comprising : i . a first level queue for buffering incoming requests ;
ii . a plurality of subsequent level queues , each subsequent level of queue corresponding to a class of incoming request ;
b . means for classifying the requests into a plurality of subsequent level queues based on user defined parameters ;
c . means for exchanging information (first message) amongst the plurality of levels of queues ;
and d . means for dispatching the requests from the queues to at least one of the queues or target components .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue request (scheduling requests) .
US20040117794A1
CLAIM 18
. A computer program product for scheduling requests (datacenter queue request) in a computing system , the computer program product comprising : a . program instruction means for buffering the request in a queue of first level ;
b . program instruction means for exchanging flow information with other levels adjacent to the first level ;
c . program instruction means for obtaining a classification value of the request ;
and d . program instruction means for scheduling the requests based on the classification value .
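The queue-usage table of '993 claims 9-10 charted above can be sketched as an observer that records which workers send to and request from which remote queue; per the '993 abstract, a queue that co-located workers both produce to and consume from is a "matched" queue. The class and field names below are hypothetical.

```python
from collections import defaultdict

class QueueUsageDetector:
    """Sketch of '993 claims 9-10: observe datacenter queue requests and
    build a table of queue usage from the observations."""

    def __init__(self):
        self.usage = defaultdict(lambda: {"producers": set(), "consumers": set()})

    def observe(self, worker_id, queue_id, kind):
        # kind: "send" for a producer's message, "request" for a consumer's
        role = "producers" if kind == "send" else "consumers"
        self.usage[queue_id][role].add(worker_id)

    def matched_queues(self):
        # a queue is "matched" when local workers both produce and consume
        return [q for q, u in self.usage.items()
                if u["producers"] and u["consumers"]]
```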

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue request (scheduling requests) .
US20040117794A1
CLAIM 18
. A computer program product for scheduling requests (datacenter queue request) in a computing system , the computer program product comprising : a . program instruction means for buffering the request in a queue of first level ;
b . program instruction means for exchanging flow information with other levels adjacent to the first level ;
c . program instruction means for obtaining a classification value of the request ;
and d . program instruction means for scheduling the requests based on the classification value .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (second program, first program) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20040117794A1
CLAIM 31
. A computer program product for scheduling requests in a network , the computer program product comprising : a . first program (first criterion) instruction means for receiving a request at a first level load balancer ;
b . second program (first criterion) instruction means for exchanging information for the request with adjacent level load balancers ;
c . third program instruction means for obtaining a classification value of the request based on the exchanged information ;
and d . fourth program instruction means for scheduling the request based on the classification value .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion (second program, first program) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker .
US20040117794A1
CLAIM 31
. A computer program product for scheduling requests in a network , the computer program product comprising : a . first program (first criterion) instruction means for receiving a request at a first level load balancer ;
b . second program (first criterion) instruction means for exchanging information for the request with adjacent level load balancers ;
c . third program instruction means for obtaining a classification value of the request based on the exchanged information ;
and d . fourth program instruction means for scheduling the request based on the classification value .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message (exchanging information) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20040117794A1
CLAIM 19
. An apparatus suitable for load balancing in a computing system , the apparatus comprising : a . a plurality of queues comprising : i . a first level queue for buffering incoming requests ;
ii . a plurality of subsequent level queues , each subsequent level of queue corresponding to a class of incoming request ;
b . means for classifying the requests into a plurality of subsequent level queues based on user defined parameters ;
c . means for exchanging information (first message) amongst the plurality of levels of queues ;
and d . means for dispatching the requests from the queues to at least one of the queues or target components .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (exchanging information) sent by the producer worker before storing the first message .
US20040117794A1
CLAIM 19
. An apparatus suitable for load balancing in a computing system , the apparatus comprising : a . a plurality of queues comprising : i . a first level queue for buffering incoming requests ;
ii . a plurality of subsequent level queues , each subsequent level of queue corresponding to a class of incoming request ;
b . means for classifying the requests into a plurality of subsequent level queues based on user defined parameters ;
c . means for exchanging information (first message) amongst the plurality of levels of queues ;
and d . means for dispatching the requests from the queues to at least one of the queues or target components .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue request (scheduling requests) .
US20040117794A1
CLAIM 18
. A computer program product for scheduling requests (datacenter queue request) in a computing system , the computer program product comprising : a . program instruction means for buffering the request in a queue of first level ;
b . program instruction means for exchanging flow information with other levels adjacent to the first level ;
c . program instruction means for obtaining a classification value of the request ;
and d . program instruction means for scheduling the requests based on the classification value .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue request (scheduling requests) .
US20040117794A1
CLAIM 18
. A computer program product for scheduling requests (datacenter queue request) in a computing system , the computer program product comprising : a . program instruction means for buffering the request in a queue of first level ;
b . program instruction means for exchanging flow information with other levels adjacent to the first level ;
c . program instruction means for obtaining a classification value of the request ;
and d . program instruction means for scheduling the requests based on the classification value .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (exchanging information) by deleting the first message .
US20040117794A1
CLAIM 19
. An apparatus suitable for load balancing in a computing system , the apparatus comprising : a . a plurality of queues comprising : i . a first level queue for buffering incoming requests ;
ii . a plurality of subsequent level queues , each subsequent level of queue corresponding to a class of incoming request ;
b . means for classifying the requests into a plurality of subsequent level queues based on user defined parameters ;
c . means for exchanging information (first message) amongst the plurality of levels of queues ;
and d . means for dispatching the requests from the queues to at least one of the queues or target components .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (second program, first program) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20040117794A1
CLAIM 31
. A computer program product for scheduling requests in a network , the computer program product comprising : a . first program (first criterion) instruction means for receiving a request at a first level load balancer ;
b . second program (first criterion) instruction means for exchanging information for the request with adjacent level load balancers ;
c . third program instruction means for obtaining a classification value of the request based on the exchanged information ;
and d . fourth program instruction means for scheduling the request based on the classification value .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion (second program, first program) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker .
US20040117794A1
CLAIM 31
. A computer program product for scheduling requests in a network , the computer program product comprising : a . first program (first criterion) instruction means for receiving a request at a first level load balancer ;
b . second program (first criterion) instruction means for exchanging information for the request with adjacent level load balancers ;
c . third program instruction means for obtaining a classification value of the request based on the exchanged information ;
and d . fourth program instruction means for scheduling the request based on the classification value .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20040107240A1

Filed: 2002-12-02     Issued: 2004-06-03

Method and system for intertask messaging between multiple processors

(Original Assignee) Conexant Inc     (Current Assignee) Conexant Inc ; Brooktree Broadband Holding Inc

Boris Zabarski, Dorit Pardo, Yaacov Ben-Simon
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (first message) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20040107240A1
CLAIM 34
. A system for communicating messages between processors comprising : a plurality of interconnected processors , each processor including : a first message (first message) queue ;
a first task operably connected to the first message queue ;
a plurality of mediator message queues ;
and a plurality of mediator tasks , each mediator task being operably connected to a different mediator message queue of the plurality of message queues and the first message queue , each mediator task being associated with a different processor of a subset of the plurality of processors , and wherein each mediator task of a processor is adapted to transfer at least one message from the corresponding mediator message queue to the first message queue of the processor during an execution of the mediator task by the processor , the at least one message being stored by a first task of another processor in the corresponding mediator message queue and intended for the first task of the processor .
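The mediator arrangement of '240 claims 10 and 34 quoted above can be sketched as follows: each processor owns a first message queue read by its first task, plus one mediator queue per peer, and a mediator task drains its mediator queue into the first message queue. A minimal single-process sketch; the class name `ProcessorNode` and its methods are hypothetical.

```python
from queue import Queue, Empty

class ProcessorNode:
    """Sketch of '240 claims 10/34: per-peer mediator queues feeding a
    processor's first message queue via a mediator task."""

    def __init__(self, peer_ids):
        self.first_queue = Queue()                        # read by the first task
        self.mediator_queues = {p: Queue() for p in peer_ids}

    def receive_from_peer(self, peer_id, message):
        # a task on another processor stores a message intended for our first task
        self.mediator_queues[peer_id].put(message)

    def run_mediator(self, peer_id):
        # the mediator task transfers messages from its mediator queue
        # into the first message queue during its execution
        q = self.mediator_queues[peer_id]
        while True:
            try:
                self.first_queue.put(q.get_nowait())
            except Empty:
                break
```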

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (first message) sent by the producer worker before storing the first message .
US20040107240A1
CLAIM 34
. A system for communicating messages between processors comprising : a plurality of interconnected processors , each processor including : a first message (first message) queue ;
a first task operably connected to the first message queue ;
a plurality of mediator message queues ;
and a plurality of mediator tasks , each mediator task being operably connected to a different mediator message queue of the plurality of message queues and the first message queue , each mediator task being associated with a different processor of a subset of the plurality of processors , and wherein each mediator task of a processor is adapted to transfer at least one message from the corresponding mediator message queue to the first message queue of the processor during an execution of the mediator task by the processor , the at least one message being stored by a first task of another processor in the corresponding mediator message queue and intended for the first task of the processor .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (first message) includes deleting the first message .
US20040107240A1
CLAIM 34
. A system for communicating messages between processors comprising : a plurality of interconnected processors , each processor including : a first message (first message) queue ;
a first task operably connected to the first message queue ;
a plurality of mediator message queues ;
and a plurality of mediator tasks , each mediator task being operably connected to a different mediator message queue of the plurality of message queues and the first message queue , each mediator task being associated with a different processor of a subset of the plurality of processors , and wherein each mediator task of a processor is adapted to transfer at least one message from the corresponding mediator message queue to the first message queue of the processor during an execution of the mediator task by the processor , the at least one message being stored by a first task of another processor in the corresponding mediator message queue and intended for the first task of the processor .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (first message) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20040107240A1
CLAIM 34
. A system for communicating messages between processors comprising : a plurality of interconnected processors , each processor including : a first message (first message) queue ;
a first task operably connected to the first message queue ;
a plurality of mediator message queues ;
and a plurality of mediator tasks , each mediator task being operably connected to a different mediator message queue of the plurality of message queues and the first message queue , each mediator task being associated with a different processor of a subset of the plurality of processors , and wherein each mediator task of a processor is adapted to transfer at least one message from the corresponding mediator message queue to the first message queue of the processor during an execution of the mediator task by the processor , the at least one message being stored by a first task of another processor in the corresponding mediator message queue and intended for the first task of the processor .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM (multiple processors) and sends a first message (first message) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM (multiple processors) and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20040107240A1
CLAIM 10
. A system for communicating at least one message between multiple processors (first VM, second VM, second VMs) , the system comprising : a first processor ;
a first queue being adapted to store at least one message intended for a first task of the first processor ;
a second queue being adapted to store at least one message from at least one task of a second processor , the at least one message being intended for the first task of the first processor ;
and a first mediator task being adapted to transfer the at least one message intended for the first task from the second queue to the first queue during an execution of the first mediator task by the first processor .

US20040107240A1
CLAIM 34
. A system for communicating messages between processors comprising : a plurality of interconnected processors , each processor including : a first message (first message) queue ;
a first task operably connected to the first message queue ;
a plurality of mediator message queues ;
and a plurality of mediator tasks , each mediator task being operably connected to a different mediator message queue of the plurality of message queues and the first message queue , each mediator task being associated with a different processor of a subset of the plurality of processors , and wherein each mediator task of a processor is adapted to transfer at least one message from the corresponding mediator message queue to the first message queue of the processor during an execution of the mediator task by the processor , the at least one message being stored by a first task of another processor in the corresponding mediator message queue and intended for the first task of the processor .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (first message) sent by the producer worker before storing the first message .
US20040107240A1
CLAIM 34
. A system for communicating messages between processors comprising : a plurality of interconnected processors , each processor including : a first message (first message) queue ;
a first task operably connected to the first message queue ;
a plurality of mediator message queues ;
and a plurality of mediator tasks , each mediator task being operably connected to a different mediator message queue of the plurality of message queues and the first message queue , each mediator task being associated with a different processor of a subset of the plurality of processors , and wherein each mediator task of a processor is adapted to transfer at least one message from the corresponding mediator message queue to the first message queue of the processor during an execution of the mediator task by the processor , the at least one message being stored by a first task of another processor in the corresponding mediator message queue and intended for the first task of the processor .

US8954993B2
CLAIM 20
. The datacenter of claim 14 , wherein the first and second VMs (multiple processors) are configured to execute on the same physical machine .
US20040107240A1
CLAIM 10
. A system for communicating at least one message between multiple processors (first VM, second VM, second VMs) , the system comprising : a first processor ;
a first queue being adapted to store at least one message intended for a first task of the first processor ;
a second queue being adapted to store at least one message from at least one task of a second processor , the at least one message being intended for the first task of the first processor ;
and a first mediator task being adapted to transfer the at least one message intended for the first task from the second queue to the first queue during an execution of the first mediator task by the first processor .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (first message) by deleting the first message .
US20040107240A1
CLAIM 34
. A system for communicating messages between processors comprising : a plurality of interconnected processors , each processor including : a first message (first message) queue ;
a first task operably connected to the first message queue ;
a plurality of mediator message queues ;
and a plurality of mediator tasks , each mediator task being operably connected to a different mediator message queue of the plurality of message queues and the first message queue , each mediator task being associated with a different processor of a subset of the plurality of processors , and wherein each mediator task of a processor is adapted to transfer at least one message from the corresponding mediator message queue to the first message queue of the processor during an execution of the mediator task by the processor , the at least one message being stored by a first task of another processor in the corresponding mediator message queue and intended for the first task of the processor .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20030014551A1

Filed: 2002-08-21     Issued: 2003-01-16

Framework system

(Original Assignee) Future System Consulting Corp     (Current Assignee) Future Architect Inc

Kunihito Ishibashi, Mitsuru Maeshima, Narihiro Okumura, Isao Sakashita
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (flow definition) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20030014551A1
CLAIM 1
. A framework system connected so as to be capable of communication with one or more clients , said system comprising : a plurality of sets of business logic ;
one or more framework services , one or more of which is or are associated with at least one of the sets of business logic and which , responsive to one or more request messages from at least one of the client or clients , is or are capable of executing one or more selected sets among the sets of business logic and outputting one or more reply messages to at least one of the client or clients ;
one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
and one or more flow definition (first message) files associated with one or more of the framework service or services ;
at least one of the request message or messages comprising at least one subject ID identifying at least one subject of at least one of the request message or messages ;
at least one of the flow definition file or files comprising a plurality of definition sentences respectively corresponding to a plurality of different subject IDs , each such definition sentence indicating one or more schedules for execution of one or more prescribed sets of business logic ;
and at least one of the framework service or services , upon receipt of at least one of the request message or messages from the messaging service or services , referencing at least one definition sentence present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the request message or messages , and selecting at least one set of business logic for execution in accordance with at least one execution schedule indicated by at least one of the referenced definition sentence or sentences .
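The core mechanism of '993 claim 1 charted above — intercept a co-located producer's message bound for a remote datacenter queue, hold a local copy, serve a co-located consumer from that copy, and modify the cached message on a command-channel signal — can be sketched as a small thread-safe cache. All names below are hypothetical; per '993 claim 5, the modification shown is deletion.

```python
import threading

class LocalQueueCache:
    """Sketch of '993 claim 1: a local (partial) copy of a remote
    datacenter queue serving co-located producer and consumer workers."""

    def __init__(self):
        self._lock = threading.Lock()
        self._cache = {}  # partial local copy of the remote datacenter queue

    def intercept_send(self, msg_id, payload):
        # detect the producer worker's send and store the message locally
        with self._lock:
            self._cache[msg_id] = payload

    def serve_request(self):
        # provide a locally stored message to the consumer worker
        with self._lock:
            if self._cache:
                msg_id = next(iter(self._cache))
                return msg_id, self._cache[msg_id]
        return None

    def on_command_channel_signal(self, msg_id):
        # modify the stored message in response to the signal from the
        # command channel (here: delete it)
        with self._lock:
            self._cache.pop(msg_id, None)
```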

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (flow definition) sent by the producer worker before storing the first message .
US20030014551A1
CLAIM 1
. A framework system connected so as to be capable of communication with one or more clients , said system comprising : a plurality of sets of business logic ;
one or more framework services , one or more of which is or are associated with at least one of the sets of business logic and which , responsive to one or more request messages from at least one of the client or clients , is or are capable of executing one or more selected sets among the sets of business logic and outputting one or more reply messages to at least one of the client or clients ;
one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
and one or more flow definition (first message) files associated with one or more of the framework service or services ;
at least one of the request message or messages comprising at least one subject ID identifying at least one subject of at least one of the request message or messages ;
at least one of the flow definition file or files comprising a plurality of definition sentences respectively corresponding to a plurality of different subject IDs , each such definition sentence indicating one or more schedules for execution of one or more prescribed sets of business logic ;
and at least one of the framework service or services , upon receipt of at least one of the request message or messages from the messaging service or services , referencing at least one definition sentence present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the request message or messages , and selecting at least one set of business logic for execution in accordance with at least one execution schedule indicated by at least one of the referenced definition sentence or sentences .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (low definition) includes deleting the first message .
US20030014551A1
CLAIM 1
. A framework system connected so as to be capable of communication with one or more clients , said system comprising : a plurality of sets of business logic ;
one or more framework services , one or more of which is or are associated with at least one of the sets of business logic and which , responsive to one or more request messages from at least one of the client or clients , is or are capable of executing one or more selected sets among the sets of business logic and outputting one or more reply messages to at least one of the client or clients ;
one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
and one or more flow definition (first message) files associated with one or more of the framework service or services ;
at least one of the request message or messages comprising at least one subject ID identifying at least one subject of at least one of the request message or messages ;
at least one of the flow definition file or files comprising a plurality of definition sentences respectively corresponding to a plurality of different subject IDs , each such definition sentence indicating one or more schedules for execution of one or more prescribed sets of business logic ;
and at least one of the framework service or services , upon receipt of at least one of the request message or messages from the messaging service or services , referencing at least one definition sentence present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the request message or messages , and selecting at least one set of business logic for execution in accordance with at least one execution schedule indicated by at least one of the referenced definition sentence or sentences .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage (more set) detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (low definition) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
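For orientation, the mechanism recited in claim 8 above can be sketched as follows. This is a minimal illustration only, with hypothetical names; it is not the patented implementation. It shows a queue usage detector observing datacenter queue requests from co-located workers and building a table of queue usage (compare dependent claims 9-10), from which "matched" queues eligible for local handling can be identified.

```python
class QueueUsageDetector:
    """Hypothetical sketch of the claimed queue usage detector module.

    It observes datacenter queue requests from workers on the same server
    and builds a table mapping each queue to its local producers and
    consumers. All class, method, and field names are illustrative.
    """

    def __init__(self):
        # queue name -> {"producers": set of worker ids, "consumers": set of worker ids}
        self.usage_table = {}

    def observe(self, worker_id, queue_name, op):
        # Record one observed datacenter queue request (claim 10).
        entry = self.usage_table.setdefault(
            queue_name, {"producers": set(), "consumers": set()})
        if op == "send":
            entry["producers"].add(worker_id)
        elif op == "request":
            entry["consumers"].add(worker_id)

    def matched_queues(self):
        # A queue is "matched" when co-located workers both produce to and
        # consume from it, so its messages can be processed locally.
        return [q for q, e in self.usage_table.items()
                if e["producers"] and e["consumers"]]
```

Under this sketch, a queue with only local producers (or only local consumers) is left to the remote datacenter queue, while a matched queue becomes a candidate for local caching.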
US20030014551A1
CLAIM 1
. A framework system connected so as to be capable of communication with one or more clients , said system comprising : a plurality of sets of business logic ;
one or more framework services , one or more of which is or are associated with at least one of the sets of business logic and which , responsive to one or more request messages from at least one of the client or clients , is or are capable of executing one or more selected sets among the sets of business logic and outputting one or more reply messages to at least one of the client or clients ;
one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
and one or more flow definition (first message) files associated with one or more of the framework service or services ;
at least one of the request message or messages comprising at least one subject ID identifying at least one subject of at least one of the request message or messages ;
at least one of the flow definition file or files comprising a plurality of definition sentences respectively corresponding to a plurality of different subject IDs , each such definition sentence indicating one or more schedules for execution of one or more prescribed sets of business logic ;
and at least one of the framework service or services , upon receipt of at least one of the request message or messages from the messaging service or services , referencing at least one definition sentence present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the request message or messages , and selecting at least one set of business logic for execution in accordance with at least one execution schedule indicated by at least one of the referenced definition sentence or sentences .

US20030014551A1
CLAIM 17
. A method of operating a framework system connected so as to be capable of communication with one or more clients , said method comprising : a step wherein one or more flow definition files indicating one or more business logic execution schedules respectively corresponding to a plurality of different subject IDs is or are prepared ;
a step wherein at least one request message having a particular subject ID or IDs is received from at least one of the client or clients ;
a step wherein at least one business logic execution schedule present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the received request message or messages is or are referenced ;
a step wherein one or more sets (queue usage) of business logic is or are selected from among a plurality of previously prepared sets of business logic in accordance with at least one of the referenced business logic execution schedule or schedules ;
a step wherein one or more of the selected set or sets of business logic is or are executed ;
and a step wherein one or more reply messages is or are returned to at least one of the client or clients .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage (more set) based on at least one observed datacenter queue request .
US20030014551A1
CLAIM 17
. A method of operating a framework system connected so as to be capable of communication with one or more clients , said method comprising : a step wherein one or more flow definition files indicating one or more business logic execution schedules respectively corresponding to a plurality of different subject IDs is or are prepared ;
a step wherein at least one request message having a particular subject ID or IDs is received from at least one of the client or clients ;
a step wherein at least one business logic execution schedule present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the received request message or messages is or are referenced ;
a step wherein one or more sets (queue usage) of business logic is or are selected from among a plurality of previously prepared sets of business logic in accordance with at least one of the referenced business logic execution schedule or schedules ;
a step wherein one or more of the selected set or sets of business logic is or are executed ;
and a step wherein one or more reply messages is or are returned to at least one of the client or clients .

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage (more set) detector module is further configured to observe the at least one observed datacenter queue request .
US20030014551A1
CLAIM 17
. A method of operating a framework system connected so as to be capable of communication with one or more clients , said method comprising : a step wherein one or more flow definition files indicating one or more business logic execution schedules respectively corresponding to a plurality of different subject IDs is or are prepared ;
a step wherein at least one request message having a particular subject ID or IDs is received from at least one of the client or clients ;
a step wherein at least one business logic execution schedule present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the received request message or messages is or are referenced ;
a step wherein one or more sets (queue usage) of business logic is or are selected from among a plurality of previously prepared sets of business logic in accordance with at least one of the referenced business logic execution schedule or schedules ;
a step wherein one or more of the selected set or sets of business logic is or are executed ;
and a step wherein one or more reply messages is or are returned to at least one of the client or clients .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message (low definition) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter (queue management) location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
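The datacenter controller of claim 14 above can likewise be sketched in simplified form. This is a hedged illustration under assumed names (nothing here is taken from the patent's own code): the controller tracks which VM each worker runs on, intercepts a producer's message before it reaches the remote datacenter queue, stores it in a queue cache at a different location, serves it to the consumer, and modifies the cached message on a command-channel signal.

```python
class DatacenterController:
    """Illustrative sketch of the claim 14 controller. Hypothetical names."""

    def __init__(self):
        self.worker_vm = {}    # worker id -> VM id (first VM, second VM)
        self.queue_cache = {}  # queue name -> cached messages at the "second location"

    def register(self, worker_id, vm_id):
        # Detect a worker executing on a given VM.
        self.worker_vm[worker_id] = vm_id

    def producer_send(self, worker_id, queue_name, message):
        # Intercept the message before storing it; cache it locally
        # instead of forwarding immediately to the remote queue.
        self.queue_cache.setdefault(queue_name, []).append(message)

    def consumer_request(self, worker_id, queue_name):
        # Provide the stored message in response to a message request.
        msgs = self.queue_cache.get(queue_name, [])
        return msgs.pop(0) if msgs else None

    def command_signal(self, queue_name, signal):
        # A "delete" signal from the command channel modifies the stored
        # message by removing it (compare claim 21).
        if signal == "delete" and self.queue_cache.get(queue_name):
            self.queue_cache[queue_name].pop(0)
```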
US20030014551A1
CLAIM 1
. A framework system connected so as to be capable of communication with one or more clients , said system comprising : a plurality of sets of business logic ;
one or more framework services , one or more of which is or are associated with at least one of the sets of business logic and which , responsive to one or more request messages from at least one of the client or clients , is or are capable of executing one or more selected sets among the sets of business logic and outputting one or more reply messages to at least one of the client or clients ;
one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
and one or more flow definition (first message) files associated with one or more of the framework service or services ;
at least one of the request message or messages comprising at least one subject ID identifying at least one subject of at least one of the request message or messages ;
at least one of the flow definition file or files comprising a plurality of definition sentences respectively corresponding to a plurality of different subject IDs , each such definition sentence indicating one or more schedules for execution of one or more prescribed sets of business logic ;
and at least one of the framework service or services , upon receipt of at least one of the request message or messages from the messaging service or services , referencing at least one definition sentence present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the request message or messages , and selecting at least one set of business logic for execution in accordance with at least one execution schedule indicated by at least one of the referenced definition sentence or sentences .

US20030014551A1
CLAIM 16
. A framework system connected so as to be capable of communication with one or more clients , said system comprising : one or more framework services , one or more of which is or are capable of processing one or more request messages from at least one of the client or clients and of outputting one or more reply messages to at least one of the client or clients ;
and one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
the request message or messages being prioritized in a particular fashion ;
at least one of the messaging service or services comprising one or more message queues capable of temporarily delaying at least one of the request message or messages and one or more queue management (second datacenter, second datacenter location) components capable of managing input and/or output of at least one of the message queue or queues ;
and at least one of the queue management component or components being provided with a prioritized mode by which , at one or more times when a plurality of messages have been stored in one or more of the message queue or queues , the order or orders in which the plurality of messages are output from the message queue or queues is or are controlled in correspondence to the respective priority or priorities of the respective message or messages , and with a sequence protection mode by which , at one or more times when a plurality of messages have been stored in one or more of the message queue or queues , retrieval of one or more other messages stored in the message queue or queues is prohibited until completion of processing at one or more of the framework service or services of at least one message previously retrieved from the message queue or queues .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (low definition) sent by the producer worker before storing the first message .
US20030014551A1
CLAIM 1
. A framework system connected so as to be capable of communication with one or more clients , said system comprising : a plurality of sets of business logic ;
one or more framework services , one or more of which is or are associated with at least one of the sets of business logic and which , responsive to one or more request messages from at least one of the client or clients , is or are capable of executing one or more selected sets among the sets of business logic and outputting one or more reply messages to at least one of the client or clients ;
one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
and one or more flow definition (first message) files associated with one or more of the framework service or services ;
at least one of the request message or messages comprising at least one subject ID identifying at least one subject of at least one of the request message or messages ;
at least one of the flow definition file or files comprising a plurality of definition sentences respectively corresponding to a plurality of different subject IDs , each such definition sentence indicating one or more schedules for execution of one or more prescribed sets of business logic ;
and at least one of the framework service or services , upon receipt of at least one of the request message or messages from the messaging service or services , referencing at least one definition sentence present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the request message or messages , and selecting at least one set of business logic for execution in accordance with at least one execution schedule indicated by at least one of the referenced definition sentence or sentences .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage (more set) based on at least one observed datacenter queue request .
US20030014551A1
CLAIM 17
. A method of operating a framework system connected so as to be capable of communication with one or more clients , said method comprising : a step wherein one or more flow definition files indicating one or more business logic execution schedules respectively corresponding to a plurality of different subject IDs is or are prepared ;
a step wherein at least one request message having a particular subject ID or IDs is received from at least one of the client or clients ;
a step wherein at least one business logic execution schedule present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the received request message or messages is or are referenced ;
a step wherein one or more sets (queue usage) of business logic is or are selected from among a plurality of previously prepared sets of business logic in accordance with at least one of the referenced business logic execution schedule or schedules ;
a step wherein one or more of the selected set or sets of business logic is or are executed ;
and a step wherein one or more reply messages is or are returned to at least one of the client or clients .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (low definition) by deleting the first message .
US20030014551A1
CLAIM 1
. A framework system connected so as to be capable of communication with one or more clients , said system comprising : a plurality of sets of business logic ;
one or more framework services , one or more of which is or are associated with at least one of the sets of business logic and which , responsive to one or more request messages from at least one of the client or clients , is or are capable of executing one or more selected sets among the sets of business logic and outputting one or more reply messages to at least one of the client or clients ;
one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
and one or more flow definition (first message) files associated with one or more of the framework service or services ;
at least one of the request message or messages comprising at least one subject ID identifying at least one subject of at least one of the request message or messages ;
at least one of the flow definition file or files comprising a plurality of definition sentences respectively corresponding to a plurality of different subject IDs , each such definition sentence indicating one or more schedules for execution of one or more prescribed sets of business logic ;
and at least one of the framework service or services , upon receipt of at least one of the request message or messages from the messaging service or services , referencing at least one definition sentence present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the request message or messages , and selecting at least one set of business logic for execution in accordance with at least one execution schedule indicated by at least one of the referenced definition sentence or sentences .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20030055668A1

Filed: 2002-08-08     Issued: 2003-03-20

Workflow engine for automating business processes in scalable multiprocessor computer platforms

(Original Assignee) TriVium Systems Inc     (Current Assignee) TriVium Systems Inc

Amitabh Saran, Sanjay Suri, Purushottaman Balakrishnan, Shashidhar Kamath
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (first message) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
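The method steps of claim 1 above can be read as a simple message lifecycle at the first server. The sketch below is a minimal, assumption-laden illustration (all identifiers are invented for exposition): a producer's send is intercepted into a local queue cache holding a partial copy of the datacenter queue, a consumer's message request is served from that cache, and a command-channel signal modifies (here, deletes) the stored message.

```python
from collections import deque

class LocalQueueCache:
    """Hypothetical sketch of the claimed local queue processing.

    Messages from a co-located producer are intercepted and stored at the
    first server, then provided to a co-located consumer without a round
    trip to the remote datacenter queue. Names are illustrative only.
    """

    def __init__(self, queue_name):
        self.queue_name = queue_name  # remote datacenter queue this cache mirrors
        self._cache = deque()         # partial local copy of the datacenter queue

    def intercept_send(self, message):
        # Detecting/intercepting the producer worker's first message
        # and storing it in the queue cache at the first server.
        self._cache.append(message)

    def handle_request(self):
        # Providing the stored message to the consumer worker in
        # response to its message request.
        return self._cache.popleft() if self._cache else None

    def on_command_signal(self, signal):
        # Receiving a signal from the command channel associated with the
        # datacenter queue; a "delete" signal removes the stored message
        # (compare claim 5).
        if signal == "delete" and self._cache:
            self._cache.popleft()
```

The design point illustrated is that the cache is the unit kept consistent with the remote queue: the command channel carries remote-side state changes (such as deletion) back to the locally stored copy.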
US20030055668A1
CLAIM 1
. A system for executing a workflow comprising : a workflow engine operable to receive an input message having a characteristic and data , the workflow engine operable to implement a predetermined finite state machine , based on the characteristic of the input message ;
the workflow engine operable to transmit a first message (first message) including a first header and a first data set to a first object based on a first requirement of the predetermined finite state machine , and receive a second message from the first object , the first object having a first function , and operable to receive the first message from the workflow engine , execute the first function based on the first data set , generate a second message including a second header and second data set representing a result of the executed first function , and transmit the second message to the workflow engine ;
a message platform operable to transfer first and second messages between the first object and the workflow engine .

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (first message) sent by the producer worker before storing the first message .
US20030055668A1
CLAIM 1
. A system for executing a workflow comprising : a workflow engine operable to receive an input message having a characteristic and data , the workflow engine operable to implement a predetermined finite state machine , based on the characteristic of the input message ;
the workflow engine operable to transmit a first message (first message) including a first header and a first data set to a first object based on a first requirement of the predetermined finite state machine , and receive a second message from the first object , the first object having a first function , and operable to receive the first message from the workflow engine , execute the first function based on the first data set , generate a second message including a second header and second data set representing a result of the executed first function , and transmit the second message to the workflow engine ;
a message platform operable to transfer first and second messages between the first object and the workflow engine .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (first message) includes deleting the first message .
US20030055668A1
CLAIM 1
. A system for executing a workflow comprising : a workflow engine operable to receive an input message having a characteristic and data , the workflow engine operable to implement a predetermined finite state machine , based on the characteristic of the input message ;
the workflow engine operable to transmit a first message (first message) including a first header and a first data set to a first object based on a first requirement of the predetermined finite state machine , and receive a second message from the first object , the first object having a first function , and operable to receive the first message from the workflow engine , execute the first function based on the first data set , generate a second message including a second header and second data set representing a result of the executed first function , and transmit the second message to the workflow engine ;
a message platform operable to transfer first and second messages between the first object and the workflow engine .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (first message) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20030055668A1
CLAIM 1
. A system for executing a workflow comprising : a workflow engine operable to receive an input message having a characteristic and data , the workflow engine operable to implement a predetermined finite state machine , based on the characteristic of the input message ;
the workflow engine operable to transmit a first message (first message) including a first header and a first data set to a first object based on a first requirement of the predetermined finite state machine , and receive a second message from the first object , the first object having a first function , and operable to receive the first message from the workflow engine , execute the first function based on the first data set , generate a second message including a second header and second data set representing a result of the executed first function , and transmit the second message to the workflow engine ;
a message platform operable to transfer first and second messages between the first object and the workflow engine .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller (third data set) configured to : detect a producer worker that is executed on a first VM (second function) and sends a first message (first message) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20030055668A1
CLAIM 1
. A system for executing a workflow comprising : a workflow engine operable to receive an input message having a characteristic and data , the workflow engine operable to implement a predetermined finite state machine , based on the characteristic of the input message ;
the workflow engine operable to transmit a first message (first message) including a first header and a first data set to a first object based on a first requirement of the predetermined finite state machine , and receive a second message from the first object , the first object having a first function , and operable to receive the first message from the workflow engine , execute the first function based on the first data set , generate a second message including a second header and second data set representing a result of the executed first function , and transmit the second message to the workflow engine ;
a message platform operable to transfer first and second messages between the first object and the workflow engine .

US20030055668A1
CLAIM 2
. A system for executing a workflow according to claim 1 wherein , the workflow engine is operable to transmit a third message including a third header and third data set (datacenter controller) to a second object based on a second requirement of the predetermined finite state machine , the identity of the second object determined based on the second data set , the second object having a second function (first VM) , and operable to receive the third message from the workflow engine , execute the second function based on the third data set , generate a fourth message including a fourth header and a fourth data set , and transmit the fourth message to the workflow engine .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (first message) sent by the producer worker before storing the first message .
US20030055668A1
CLAIM 1
. A system for executing a workflow comprising : a workflow engine operable to receive an input message having a characteristic and data , the workflow engine operable to implement a predetermined finite state machine , based on the characteristic of the input message ;
the workflow engine operable to transmit a first message (first message) including a first header and a first data set to a first object based on a first requirement of the predetermined finite state machine , and receive a second message from the first object , the first object having a first function , and operable to receive the first message from the workflow engine , execute the first function based on the first data set , generate a second message including a second header and second data set representing a result of the executed first function , and transmit the second message to the workflow engine ;
a message platform operable to transfer first and second messages between the first object and the workflow engine .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (first message) by deleting the first message .
US20030055668A1
CLAIM 1
. A system for executing a workflow comprising : a workflow engine operable to receive an input message having a characteristic and data , the workflow engine operable to implement a predetermined finite state machine , based on the characteristic of the input message ;
the workflow engine operable to transmit a first message (first message) including a first header and a first data set to a first object based on a first requirement of the predetermined finite state machine , and receive a second message from the first object , the first object having a first function , and operable to receive the first message from the workflow engine , execute the first function based on the first data set , generate a second message including a second header and second data set representing a result of the executed first function , and transmit the second message to the workflow engine ;
a message platform operable to transfer first and second messages between the first object and the workflow engine .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20030097457A1

Filed: 2002-08-08     Issued: 2003-05-22

Scalable multiprocessor architecture for business computer platforms

(Original Assignee) Amitabh Saran; Mathews Manaloor; Arun Maheshwari; Sanjay Suri; Tarak Goradia     

Amitabh Saran, Mathews Manaloor, Arun Maheshwari, Sanjay Suri, Tarak Goradia
US8954993B2
CLAIM 1
. A method to locally process queue requests (exchanging messages) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request (message request) to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20030097457A1
CLAIM 2
. A scalable software architecture according to claim 1 and further comprising a message interface coupled to the messaging platform for exchanging messages (queue requests) between the messaging platform and a third-party application .

US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .
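The claim 1 method above recites a concrete sequence — cache a producer's message locally, serve it to a co-located consumer on request, and modify the cached copy on a command-channel signal. A minimal sketch of that sequence, with all class and method names hypothetical (they do not appear in the patent):

```python
# Hypothetical sketch of the US8954993B2 claim 1 flow: a local queue
# cache holding a (partial) copy of a remote datacenter queue.
from collections import deque

class LocalQueueCache:
    """Holds a partial local copy of a remote datacenter queue."""

    def __init__(self):
        self._cache = deque()

    def on_producer_send(self, message):
        # Detected producer send to the remote queue: keep a local copy.
        self._cache.append(message)

    def on_consumer_request(self):
        # Serve a co-located consumer from the local copy when possible.
        return self._cache.popleft() if self._cache else None

    def on_command_signal(self, signal, message):
        # Modify the stored message in response to a command-channel
        # signal, e.g. delete it (as in dependent claim 21).
        if signal == "delete" and message in self._cache:
            self._cache.remove(message)

cache = LocalQueueCache()
cache.on_producer_send("task-1")
cache.on_producer_send("task-2")
served = cache.on_consumer_request()   # serves "task-1" locally
cache.on_command_signal("delete", "task-2")
```

The deletion path mirrors claim 21's "modify the stored first message by deleting the first message".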

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request (message request) from the consumer worker to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue is configured to hide a requested message upon receiving the message request (message request) from the consumer worker .
US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .
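Claims 6 and 7 above describe conditional interception: forward the consumer's request to the remote queue only if a criterion is met, the claim 7 criterion being whether the queue hides a requested message on receipt. A sketch of that branch, with all names illustrative:

```python
# Hypothetical sketch of the claim 6/7 interception criterion:
# forward the request only when the remote queue hides messages on
# receipt; otherwise refrain from forwarding and answer locally.
def handle_message_request(request, queue_hides_on_receive,
                           forward, serve_locally):
    if queue_hides_on_receive:
        # Criterion met: forward so the remote queue can hide the message.
        return forward(request)
    # Criterion not met: refrain from forwarding.
    return serve_locally(request)

forwarded = []
result = handle_message_request(
    "req-1",
    queue_hides_on_receive=True,
    forward=lambda r: forwarded.append(r) or "remote",
    serve_locally=lambda r: "local",
)
```

When the criterion is not met, the same function returns the locally served result and leaves the forwarded list untouched.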

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests (exchanging messages) from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request (message request) to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20030097457A1
CLAIM 2
. A scalable software architecture according to claim 1 and further comprising a message interface coupled to the messaging platform for exchanging messages (queue requests) between the messaging platform and a third-party application .

US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .
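Claim 8 splits the VMM into two cooperating parts: a queue usage detector that observes producer/consumer activity, and a processing module that intercepts, stores, and serves messages. A structural sketch under that reading, with all names hypothetical:

```python
# Hypothetical two-module decomposition of the claim 8 VMM.
class QueueUsageDetector:
    """Observes which co-located workers send to or request from queues."""

    def __init__(self):
        self.producers, self.consumers = set(), set()

    def observe(self, worker, op):
        (self.producers if op == "send" else self.consumers).add(worker)

class ProcessingModule:
    """Intercepts producer messages, stores them, and serves consumers."""

    def __init__(self):
        self.stored = {}  # queue name -> list of intercepted messages

    def intercept_send(self, queue, message):
        self.stored.setdefault(queue, []).append(message)

    def serve(self, queue):
        msgs = self.stored.get(queue, [])
        return msgs.pop(0) if msgs else None

detector, processor = QueueUsageDetector(), ProcessingModule()
detector.observe("worker-A", "send")      # producer detected
detector.observe("worker-B", "request")   # consumer detected
processor.intercept_send("q1", "m1")
msg = processor.serve("q1")
```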

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request (message request) from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request (message request) from the consumer worker .
US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests (exchanging messages) from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request (message request) to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20030097457A1
CLAIM 2
. A scalable software architecture according to claim 1 and further comprising a message interface coupled to the messaging platform for exchanging messages (queue requests) between the messaging platform and a third-party application .

US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .
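Claim 14 adds an ordering constraint over claim 1: the controller intercepts the producer's message before it is stored, and places it in a queue cache at a second datacenter location distinct from the queue's first location. A sketch of that constraint, with locations and names purely hypothetical:

```python
# Hypothetical sketch of the claim 14 datacenter controller:
# intercept-before-store, with the cache at a different location
# than the datacenter queue itself.
class DatacenterController:
    def __init__(self, queue_location):
        self.queue_location = queue_location
        self.caches = {}  # location -> list of cached messages

    def intercept_and_store(self, message, cache_location):
        # The queue cache must sit at a second datacenter location
        # different from the first (where the queue is stored).
        if cache_location == self.queue_location:
            raise ValueError("cache location must differ from queue location")
        self.caches.setdefault(cache_location, []).append(message)
        return cache_location

controller = DatacenterController(queue_location="dc-1")
where = controller.intercept_and_store("m1", cache_location="dc-2")
```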

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request (message request) from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request (message request) from the consumer worker .
US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20040019643A1

Filed: 2002-07-23     Issued: 2004-01-29

Remote command server

(Original Assignee) Canon Inc     (Current Assignee) Canon Inc

Robert Zirnstein
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (predetermined location) at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request (email address data) to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request, datacenter queue request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message sent by the producer worker (predetermined location) before storing the first message .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker (predetermined location) and the consumer worker are co-located on a multi-core device at the first server .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US8954993B2
CLAIM 4
. The method of claim 1 , wherein the producer worker (predetermined location) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request (email address data) from the consumer worker to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request, datacenter queue request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue is configured to hide a requested message upon receiving the message request (email address data) from the consumer worker .
US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request, datacenter queue request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker (predetermined location) at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request (email address data) to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request, datacenter queue request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue request (email address data) .
US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request, datacenter queue request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue request (email address data) .
US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request, datacenter queue request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .
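Claims 9-10 above recite building a table of queue usage from observed datacenter queue requests. A sketch of one plausible reading — counting which queues see both local sends and local requests, so "matched queues" can be identified for local handling (all names illustrative):

```python
# Hypothetical sketch of the claim 9/10 queue usage table: tally
# observed datacenter queue requests per queue and per operation.
from collections import defaultdict

usage_table = defaultdict(lambda: {"send": 0, "request": 0})

def observe_request(queue, op):
    # Each observed datacenter queue request updates the table.
    usage_table[queue][op] += 1

for queue, op in [("q1", "send"), ("q1", "request"), ("q2", "send")]:
    observe_request(queue, op)

# A queue used by both a local producer and a local consumer is a
# candidate "matched queue" (per the abstract) for local processing.
matched = [q for q, c in usage_table.items() if c["send"] and c["request"]]
```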

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker (predetermined location) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request (email address data) from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request, datacenter queue request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request (email address data) from the consumer worker .
US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request, datacenter queue request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker (predetermined location) that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request (email address data) to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request, datacenter queue request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message sent by the producer worker (predetermined location) before storing the first message .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue request (email address data) .
US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request, datacenter queue request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue request (email address data) .
US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request, datacenter queue request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker (predetermined location) and the consumer worker are co-located on a multi-core device at the first datacenter location .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request (email address data) from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request, datacenter queue request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request (email address data) from the consumer worker .
US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request, datacenter queue request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20020131089A1

Filed: 2002-03-12     Issued: 2002-09-19

Internet facsimile machine, and internet facsimile communication method

(Original Assignee) Murata Machinery Ltd     (Current Assignee) Murata Machinery Ltd

Yoshifumi Tanimoto
US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller (control unit) configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location (printing instruction) different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20020131089A1
CLAIM 5
. The Internet facsimile machine according to claim 2 , wherein the processing instruction includes one of : printing instruction (second datacenter location) of the image data ;
facsimile forwarding instruction of the image data ;
and local distributing instruction of the image data .

US20020131089A1
CLAIM 16
. The Internet facsimile machine according to claim 10 , further including : means for receiving electronic mail ;
and a control unit (datacenter controller) for determining processing of the received electronic mail based on a keyword attached to the received electronic mail .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN1437146A

Filed: 2002-02-05     Issued: 2003-08-20

Method for composing, browsing, replying to, and forwarding e-mail, and e-mail client

(Original Assignee) International Business Machines Corporation     

叶天正, 杨力平, 张雷
US8954993B2
CLAIM 1
. A method to locally process queue requests (contained) from co-located workers (e-mail system) in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
CN1437146A
CLAIM 1
. A method for composing a new mail in an e-mail system (co-located workers) , comprising the steps of : a user composes a new mail ; a Global-ID is generated and assigned to the mail ; and the mail is sent and saved .

CN1437146A
CLAIM 8
. A method for browsing a mail in an e-mail system , the mail containing a Global-ID and a Reply-to-ID , the method comprising the steps of : a user opens and browses the mail ; presenting the content contained (queue requests) in the mail to the user ; retrieving the Reply-to-ID of the mail ; determining whether the retrieved Reply-to-ID is empty ; searching the saved mails for a mail whose Global-ID corresponds to the retrieved Reply-to-ID ; including the content of the found mail in the browsed mail and presenting it to the user ; retrieving the Reply-to-ID of the found mail ; and repeating the determining , searching , including and retrieving steps until the retrieved Reply-to-ID is empty or no mail whose Global-ID corresponds to the retrieved Reply-to-ID can be found among the saved mails .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests (contained) from co-located workers (e-mail system) in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN1437146A
CLAIM 1
. A method for composing a new mail in an e-mail system (co-located workers) , comprising the steps of : a user composes a new mail ; a Global-ID is generated and assigned to the mail ; and the mail is sent and saved .

CN1437146A
CLAIM 8
. A method for browsing a mail in an e-mail system , the mail containing a Global-ID and a Reply-to-ID , the method comprising the steps of : a user opens and browses the mail ; presenting the content contained (queue requests) in the mail to the user ; retrieving the Reply-to-ID of the mail ; determining whether the retrieved Reply-to-ID is empty ; searching the saved mails for a mail whose Global-ID corresponds to the retrieved Reply-to-ID ; including the content of the found mail in the browsed mail and presenting it to the user ; retrieving the Reply-to-ID of the found mail ; and repeating the determining , searching , including and retrieving steps until the retrieved Reply-to-ID is empty or no mail whose Global-ID corresponds to the retrieved Reply-to-ID can be found among the saved mails .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests (contained) from co-located workers (e-mail system) in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM (mail browsing) and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN1437146A
CLAIM 1
. A method for composing a new mail in an e-mail system (co-located workers) , comprising the steps of : a user composes a new mail ; a Global-ID is generated and assigned to the mail ; and the mail is sent and saved .

CN1437146A
CLAIM 8
. A method for browsing a mail in an e-mail system , the mail containing a Global-ID and a Reply-to-ID , the method comprising the steps of : a user opens and browses the mail ; presenting the content contained (queue requests) in the mail to the user ; retrieving the Reply-to-ID of the mail ; determining whether the retrieved Reply-to-ID is empty ; searching the saved mails for a mail whose Global-ID corresponds to the retrieved Reply-to-ID ; including the content of the found mail in the browsed mail and presenting it to the user ; retrieving the Reply-to-ID of the found mail ; and repeating the determining , searching , including and retrieving steps until the retrieved Reply-to-ID is empty or no mail whose Global-ID corresponds to the retrieved Reply-to-ID can be found among the saved mails .

CN1437146A
CLAIM 9
. An e-mail client in an e-mail system , comprising an inbox repository , a sent-mail repository , mail browsing (first VM) means and mail editing means , the e-mail client further comprising : Global-ID generating means for generating a Global-ID capable of uniquely identifying a mail and assigning it to a new mail being edited in the mail editing means ; Reply-to-ID assigning means for assigning a Reply-to-ID to a mail edited in the mail editing means ; mail searching means for searching the repositories for a mail having a corresponding Global-ID ; and mail restoring means for restoring , according to the search result of the mail searching means , the corresponding mail to the mail browsing means .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
EP1347390A1

Filed: 2001-12-27     Issued: 2003-09-24

Framework system

(Original Assignee) Future System Consulting Corp     (Current Assignee) Future System Consulting Corp

K. Ishibashi, M. Maeshima, N. Okumura, Isao Sakashita, Yoko Igakura (all c/o Future System Consulting Corp.)
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (low definition) to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
EP1347390A1
CLAIM 1
A framework system connected so as to be capable of communication with one or more clients , said system comprising : a plurality of sets of business logic ;
one or more framework services , one or more of which is or are associated with at least one of the sets of business logic and which , responsive to one or more request messages from at least one of the client or clients , is or are capable of executing one or more selected sets among the sets of business logic and outputting one or more reply messages to at least one of the client or clients ;
one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
and one or more flow definition (first message) files associated with one or more of the framework service or services ;
at least one of the request message or messages comprising at least one subject ID identifying at least one subject of at least one of the request message or messages ;
at least one of the flow definition file or files comprising a plurality of definition sentences respectively corresponding to a plurality of different subject IDs , each such definition sentence indicating one or more schedules for execution of one or more prescribed sets of business logic ;
and at least one of the framework service or services , upon receipt of at least one of the request message or messages from the messaging service or services , referencing at least one definition sentence present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the request message or messages , and selecting at least one set of business logic for execution in accordance with at least one execution schedule indicated by at least one of the referenced definition sentence or sentences .
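The local interception-and-cache mechanism recited in US8954993B2 claim 1 can be sketched as below. This is a hedged illustration under assumed semantics; the `QueueCache` class and its method names are hypothetical and do not appear in the patent.

```python
# Hypothetical sketch of US8954993B2 claim 1: a first-server queue cache
# holding a (partial) copy of the remote datacenter queue.
from collections import deque

class QueueCache:
    def __init__(self):
        self._messages = deque()

    def intercept_send(self, message):
        # Producer worker's message is detected and stored locally
        # rather than being served from the second-server queue.
        self._messages.append(message)

    def handle_request(self):
        # Consumer worker's message request is answered from the
        # local cache, avoiding a round trip to the remote queue.
        return self._messages.popleft() if self._messages else None

    def on_command_signal(self, signal, message):
        # A command-channel signal modifies the stored message;
        # per claim 5, modification may be deletion.
        if signal == "delete" and message in self._messages:
            self._messages.remove(message)
```

A producer's intercepted message is thus provided to a co-located consumer entirely within the first server, while the command channel keeps the cache consistent with the remote queue.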

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (low definition) sent by the producer worker before storing the first message .
EP1347390A1
CLAIM 1
A framework system connected so as to be capable of communication with one or more clients , said system comprising : a plurality of sets of business logic ;
one or more framework services , one or more of which is or are associated with at least one of the sets of business logic and which , responsive to one or more request messages from at least one of the client or clients , is or are capable of executing one or more selected sets among the sets of business logic and outputting one or more reply messages to at least one of the client or clients ;
one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
and one or more flow definition (first message) files associated with one or more of the framework service or services ;
at least one of the request message or messages comprising at least one subject ID identifying at least one subject of at least one of the request message or messages ;
at least one of the flow definition file or files comprising a plurality of definition sentences respectively corresponding to a plurality of different subject IDs , each such definition sentence indicating one or more schedules for execution of one or more prescribed sets of business logic ;
and at least one of the framework service or services , upon receipt of at least one of the request message or messages from the messaging service or services , referencing at least one definition sentence present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the request message or messages , and selecting at least one set of business logic for execution in accordance with at least one execution schedule indicated by at least one of the referenced definition sentence or sentences .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (low definition) includes deleting the first message .
EP1347390A1
CLAIM 1
A framework system connected so as to be capable of communication with one or more clients , said system comprising : a plurality of sets of business logic ;
one or more framework services , one or more of which is or are associated with at least one of the sets of business logic and which , responsive to one or more request messages from at least one of the client or clients , is or are capable of executing one or more selected sets among the sets of business logic and outputting one or more reply messages to at least one of the client or clients ;
one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
and one or more flow definition (first message) files associated with one or more of the framework service or services ;
at least one of the request message or messages comprising at least one subject ID identifying at least one subject of at least one of the request message or messages ;
at least one of the flow definition file or files comprising a plurality of definition sentences respectively corresponding to a plurality of different subject IDs , each such definition sentence indicating one or more schedules for execution of one or more prescribed sets of business logic ;
and at least one of the framework service or services , upon receipt of at least one of the request message or messages from the messaging service or services , referencing at least one definition sentence present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the request message or messages , and selecting at least one set of business logic for execution in accordance with at least one execution schedule indicated by at least one of the referenced definition sentence or sentences .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage (more set) detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (low definition) to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
EP1347390A1
CLAIM 1
A framework system connected so as to be capable of communication with one or more clients , said system comprising : a plurality of sets of business logic ;
one or more framework services , one or more of which is or are associated with at least one of the sets of business logic and which , responsive to one or more request messages from at least one of the client or clients , is or are capable of executing one or more selected sets among the sets of business logic and outputting one or more reply messages to at least one of the client or clients ;
one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
and one or more flow definition (first message) files associated with one or more of the framework service or services ;
at least one of the request message or messages comprising at least one subject ID identifying at least one subject of at least one of the request message or messages ;
at least one of the flow definition file or files comprising a plurality of definition sentences respectively corresponding to a plurality of different subject IDs , each such definition sentence indicating one or more schedules for execution of one or more prescribed sets of business logic ;
and at least one of the framework service or services , upon receipt of at least one of the request message or messages from the messaging service or services , referencing at least one definition sentence present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the request message or messages , and selecting at least one set of business logic for execution in accordance with at least one execution schedule indicated by at least one of the referenced definition sentence or sentences .

EP1347390A1
CLAIM 17
A method of operating a framework system connected so as to be capable of communication with one or more clients , said method comprising : a step wherein one or more flow definition files indicating one or more business logic execution schedules respectively corresponding to a plurality of different subject IDs is or are prepared ;
a step wherein at least one request message having a particular subject ID or IDs is received from at least one of the client or clients ;
a step wherein at least one business logic execution schedule present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the received request message or messages is or are referenced ;
a step wherein one or more set (queue usage) s of business logic is or are selected from among a plurality of previously prepared sets of business logic in accordance with at least one of the referenced business logic execution schedule or schedules ;
a step wherein one or more of the selected set or sets of business logic is or are executed ;
and a step wherein one or more reply messages is or are returned to at least one of the client or clients .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage (more set) based on at least one observed datacenter queue request .
EP1347390A1
CLAIM 17
A method of operating a framework system connected so as to be capable of communication with one or more clients , said method comprising : a step wherein one or more flow definition files indicating one or more business logic execution schedules respectively corresponding to a plurality of different subject IDs is or are prepared ;
a step wherein at least one request message having a particular subject ID or IDs is received from at least one of the client or clients ;
a step wherein at least one business logic execution schedule present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the received request message or messages is or are referenced ;
a step wherein one or more set (queue usage) s of business logic is or are selected from among a plurality of previously prepared sets of business logic in accordance with at least one of the referenced business logic execution schedule or schedules ;
a step wherein one or more of the selected set or sets of business logic is or are executed ;
and a step wherein one or more reply messages is or are returned to at least one of the client or clients .
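The "table of queue usage" built from observed datacenter queue requests (US8954993B2 claims 9 and 10) can be sketched as a simple mapping from queue to observed workers; a queue used by more than one co-located worker becomes a candidate matched queue. The names below are illustrative assumptions, not the patent's own.

```python
# Hypothetical sketch of US8954993B2 claims 9-10: build a queue-usage
# table from observed (worker, queue) request pairs.
from collections import defaultdict

def build_usage_table(observed_requests):
    """Map each queue name to the set of workers observed using it."""
    table = defaultdict(set)
    for worker, queue_name in observed_requests:
        table[queue_name].add(worker)
    return table

observed = [("producer-1", "jobs"), ("consumer-1", "jobs"), ("producer-2", "logs")]
usage = build_usage_table(observed)
# "jobs" is used by two co-located workers, so it is a candidate matched queue.
matched = [q for q, workers in usage.items() if len(workers) > 1]
```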

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage (more set) detector module is further configured to observe the at least one observed datacenter queue request .
EP1347390A1
CLAIM 17
A method of operating a framework system connected so as to be capable of communication with one or more clients , said method comprising : a step wherein one or more flow definition files indicating one or more business logic execution schedules respectively corresponding to a plurality of different subject IDs is or are prepared ;
a step wherein at least one request message having a particular subject ID or IDs is received from at least one of the client or clients ;
a step wherein at least one business logic execution schedule present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the received request message or messages is or are referenced ;
a step wherein one or more set (queue usage) s of business logic is or are selected from among a plurality of previously prepared sets of business logic in accordance with at least one of the referenced business logic execution schedule or schedules ;
a step wherein one or more of the selected set or sets of business logic is or are executed ;
and a step wherein one or more reply messages is or are returned to at least one of the client or clients .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message (low definition) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter (queue management) location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
EP1347390A1
CLAIM 1
A framework system connected so as to be capable of communication with one or more clients , said system comprising : a plurality of sets of business logic ;
one or more framework services , one or more of which is or are associated with at least one of the sets of business logic and which , responsive to one or more request messages from at least one of the client or clients , is or are capable of executing one or more selected sets among the sets of business logic and outputting one or more reply messages to at least one of the client or clients ;
one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
and one or more flow definition (first message) files associated with one or more of the framework service or services ;
at least one of the request message or messages comprising at least one subject ID identifying at least one subject of at least one of the request message or messages ;
at least one of the flow definition file or files comprising a plurality of definition sentences respectively corresponding to a plurality of different subject IDs , each such definition sentence indicating one or more schedules for execution of one or more prescribed sets of business logic ;
and at least one of the framework service or services , upon receipt of at least one of the request message or messages from the messaging service or services , referencing at least one definition sentence present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the request message or messages , and selecting at least one set of business logic for execution in accordance with at least one execution schedule indicated by at least one of the referenced definition sentence or sentences .

EP1347390A1
CLAIM 16
A framework system connected so as to be capable of communication with one or more clients , said system comprising : one or more framework services , one or more of which is or are capable of processing one or more request messages from at least one of the client or clients and of outputting one or more reply messages to at least one of the client or clients ;
and one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
the request message or messages being prioritized in a particular fashion ;
at least one of the messaging service or services comprising one or more message queues capable of temporarily delaying at least one of the request message or messages and one or more queue management (second datacenter, second datacenter location) components capable of managing input and/or output of at least one of the message queue or queues ;
and at least one of the queue management component or components being provided with a prioritized mode by which , at one or more times when a plurality of messages have been stored in one or more of the message queue or queues , the order or orders in which the plurality of messages are output from the message queue or queues is or are controlled in correspondence to the respective priority or priorities of the respective message or messages , and with a sequence protection mode by which , at one or more times when a plurality of messages have been stored in one or more of the message queue or queues , retrieval of one or more other messages stored in the message queue or queues is prohibited until completion of processing at one or more of the framework service or services of at least one message previously retrieved from the message queue or queues .
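The two modes of EP1347390A1 claim 16's queue management component, a prioritized mode (stored messages leave the queue in priority order) and a sequence protection mode (retrieval of further messages is prohibited until the previously retrieved message has been processed), can be sketched together as follows. Class and method names are hypothetical.

```python
# Hypothetical sketch of EP1347390A1 claim 16's queue management:
# priority-ordered output plus sequence protection.
import heapq

class ManagedQueue:
    def __init__(self):
        self._heap = []       # (priority, insertion order, message)
        self._counter = 0
        self._in_flight = False

    def put(self, message, priority):
        # Lower number = higher priority; counter keeps FIFO order for ties.
        heapq.heappush(self._heap, (priority, self._counter, message))
        self._counter += 1

    def get(self):
        # Sequence protection mode: prohibit retrieval until the
        # previously retrieved message has completed processing.
        if self._in_flight or not self._heap:
            return None
        self._in_flight = True
        return heapq.heappop(self._heap)[2]

    def done(self):
        # Framework service signals completion of processing.
        self._in_flight = False
```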

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (low definition) sent by the producer worker before storing the first message .
EP1347390A1
CLAIM 1
A framework system connected so as to be capable of communication with one or more clients , said system comprising : a plurality of sets of business logic ;
one or more framework services , one or more of which is or are associated with at least one of the sets of business logic and which , responsive to one or more request messages from at least one of the client or clients , is or are capable of executing one or more selected sets among the sets of business logic and outputting one or more reply messages to at least one of the client or clients ;
one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
and one or more flow definition (first message) files associated with one or more of the framework service or services ;
at least one of the request message or messages comprising at least one subject ID identifying at least one subject of at least one of the request message or messages ;
at least one of the flow definition file or files comprising a plurality of definition sentences respectively corresponding to a plurality of different subject IDs , each such definition sentence indicating one or more schedules for execution of one or more prescribed sets of business logic ;
and at least one of the framework service or services , upon receipt of at least one of the request message or messages from the messaging service or services , referencing at least one definition sentence present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the request message or messages , and selecting at least one set of business logic for execution in accordance with at least one execution schedule indicated by at least one of the referenced definition sentence or sentences .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage (more set) based on at least one observed datacenter queue request .
EP1347390A1
CLAIM 17
A method of operating a framework system connected so as to be capable of communication with one or more clients , said method comprising : a step wherein one or more flow definition files indicating one or more business logic execution schedules respectively corresponding to a plurality of different subject IDs is or are prepared ;
a step wherein at least one request message having a particular subject ID or IDs is received from at least one of the client or clients ;
a step wherein at least one business logic execution schedule present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the received request message or messages is or are referenced ;
a step wherein one or more set (queue usage) s of business logic is or are selected from among a plurality of previously prepared sets of business logic in accordance with at least one of the referenced business logic execution schedule or schedules ;
a step wherein one or more of the selected set or sets of business logic is or are executed ;
and a step wherein one or more reply messages is or are returned to at least one of the client or clients .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (low definition) by deleting the first message .
EP1347390A1
CLAIM 1
A framework system connected so as to be capable of communication with one or more clients , said system comprising : a plurality of sets of business logic ;
one or more framework services , one or more of which is or are associated with at least one of the sets of business logic and which , responsive to one or more request messages from at least one of the client or clients , is or are capable of executing one or more selected sets among the sets of business logic and outputting one or more reply messages to at least one of the client or clients ;
one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
and one or more flow definition (first message) files associated with one or more of the framework service or services ;
at least one of the request message or messages comprising at least one subject ID identifying at least one subject of at least one of the request message or messages ;
at least one of the flow definition file or files comprising a plurality of definition sentences respectively corresponding to a plurality of different subject IDs , each such definition sentence indicating one or more schedules for execution of one or more prescribed sets of business logic ;
and at least one of the framework service or services , upon receipt of at least one of the request message or messages from the messaging service or services , referencing at least one definition sentence present within at least one of the definition file or files and corresponding to at least one subject ID of at least one of the request message or messages , and selecting at least one set of business logic for execution in accordance with at least one execution schedule indicated by at least one of the referenced definition sentence or sentences .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20020120696A1

Filed: 2001-04-06     Issued: 2002-08-29

System and method for pushing information from a host system to a mobile data communication device

(Original Assignee) Research in Motion Ltd     (Current Assignee) BlackBerry Ltd

Gary Mousseau, Tabitha Ferguson, Barry Linkert, Raymond Veen, William Castell, Mihal Lazaridis
US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion (communication network) is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
US20020120696A1
CLAIM 3
. The method of claim 1 , further comprising the step of initiating communicating between the first and second systems by opening a connecting via a wireless data communication network (first criterion) .
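The criterion-gated forwarding of US8954993B2 claims 6 and 7 (forward the intercepted consumer request to the remote queue only if a first criterion is met, e.g. whether the queue hides a requested message upon receiving the request) can be sketched as a small guard function. All names here are illustrative assumptions.

```python
# Hypothetical sketch of US8954993B2 claims 6-7: forward an intercepted
# message request only when the first criterion is satisfied.

def handle_message_request(request, queue_hides_on_request, forward):
    """Forward the request to the datacenter queue if the first
    criterion is met; otherwise refrain, letting the local cache
    answer the co-located consumer directly."""
    if queue_hides_on_request:   # the first criterion (claim 7's example)
        forward(request)
        return True
    return False                 # refrain from forwarding
```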

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion (communication network) includes whether the datacenter queue is configured to hide a requested message upon receiving the message request from the consumer worker .
US20020120696A1
CLAIM 3
. The method of claim 1 , further comprising the step of initiating communicating between the first and second systems by opening a connecting via a wireless data communication network (first criterion) .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (communication network) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20020120696A1
CLAIM 3
. The method of claim 1 , further comprising the step of initiating communicating between the first and second systems by opening a connecting via a wireless data communication network (first criterion) .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion (communication network) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker .
US20020120696A1
CLAIM 3
. The method of claim 1 , further comprising the step of initiating communicating between the first and second systems by opening a connecting via a wireless data communication network (first criterion) .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location (corresponding locations) ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20020120696A1
CLAIM 20
. A method of redirecting data between a first device and a second device , comprising the steps of : providing a first storage hierarchy at the first device ;
providing a second storage hierarchy at the second device ;
redirecting a plurality of data items from the first device to the second device , each data item including a location indicator within the first storage hierarchy ;
storing the redirected data items in corresponding locations (first datacenter location) within the second storage hierarchy using the location indicators .
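The redirection scheme of US20020120696A1 claim 20 (each redirected data item carries a location indicator from the first device's storage hierarchy, and the second device stores it at the corresponding location in its own hierarchy) can be sketched as below; the dictionary-based hierarchy and field names are assumptions for illustration.

```python
# Hypothetical sketch of US20020120696A1 claim 20: store redirected
# items at corresponding locations in the second storage hierarchy.

def redirect_items(items, second_hierarchy):
    """Place each redirected item in the second hierarchy at the
    folder named by its location indicator."""
    for item in items:
        folder = item["location"]   # indicator from the first hierarchy
        second_hierarchy.setdefault(folder, []).append(item["data"])
    return second_hierarchy
```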

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker and the consumer worker are co-located on a multi-core device at the first datacenter location (corresponding locations) .
US20020120696A1
CLAIM 20
. A method of redirecting data between a first device and a second device , comprising the steps of : providing a first storage hierarchy at the first device ;
providing a second storage hierarchy at the second device ;
redirecting a plurality of data items from the first device to the second device , each data item including a location indicator within the first storage hierarchy ;
storing the redirected data items in corresponding locations (first datacenter location) within the second storage hierarchy using the location indicators .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion (communication network) is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20020120696A1
CLAIM 3
. The method of claim 1 , further comprising the step of initiating communicating between the first and second systems by opening a connecting via a wireless data communication network (first criterion) .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion (communication network) includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker .
US20020120696A1
CLAIM 3
. The method of claim 1 , further comprising the step of initiating communicating between the first and second systems by opening a connecting via a wireless data communication network (first criterion) .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
JP2001285287A

Filed: 2001-02-19     Issued: 2001-10-12

Publish/subscribe apparatus and method using pre-filtering and post-filtering

(Original Assignee) Agilent Technol Inc

Jerremy Holland, Graham S Pollock, Joseph S Sventek
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server (クライアント) sending a first message to a datacenter queue at least partially stored at a second server (クライアント) ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
JP2001285287A
CLAIM 10
[Claim 10] A publish/subscribe apparatus comprising : a publisher client [パブリッシャクライアント] (first server, second server) having a first filter and operative to generate a channel instance corresponding to a subscribed message type using the first filter ; a subscriber client having a second filter, subscribing to a message type and operative to receive messages contained in the corresponding channel instance, the second filter being operative to filter instances of the particular message type using attributes of the message type ; a communication path extending between the publisher client and the subscriber client ; and a publish/subscribe mechanism located in the communication path and operative so that the subscriber client receives the corresponding channel instance through the second filter .
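
The method of US8954993B2 claim 1 can be pictured as a write-through queue cache: a message a local producer sends toward the remote datacenter queue is kept in a local copy, and a local consumer's request is answered from that copy. A minimal sketch under those assumptions (the class and method names are illustrative, not from the patent):

```python
from collections import deque

class QueueCache:
    """Local (partial) copy of a remote datacenter queue (claim 1)."""

    def __init__(self):
        self.local = deque()   # queue cache at the first server
        self.remote = deque()  # stand-in for the remote datacenter queue

    def send(self, message):
        """Producer path: store the message locally and forward it on."""
        self.local.append(message)
        self.remote.append(message)

    def request(self):
        """Consumer path: provide the stored message from the local copy."""
        return self.local.popleft() if self.local else None

    def on_command_signal(self, signal):
        """Modify the stored message in response to a command-channel signal."""
        if signal == "delete" and self.local:
            self.local.popleft()

q = QueueCache()
q.send("task-1")
print(q.request())  # served locally, no round trip to the remote queue
```

The `on_command_signal` hook corresponds to the claim's final two elements (receive a signal from the command channel; modify the stored message).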

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server (クライアント) .
JP2001285287A
CLAIM 10
[Claim 10] A publish/subscribe apparatus comprising : a publisher client [パブリッシャクライアント] (first server, second server) having a first filter and operative to generate a channel instance corresponding to a subscribed message type using the first filter ; a subscriber client having a second filter, subscribing to a message type and operative to receive messages contained in the corresponding channel instance, the second filter being operative to filter instances of the particular message type using attributes of the message type ; a communication path extending between the publisher client and the subscriber client ; and a publish/subscribe mechanism located in the communication path and operative so that the subscriber client receives the corresponding channel instance through the second filter .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server (クライアント) , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server (クライアント) ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module (スクライブ装置) configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
JP2001285287A
CLAIM 2
[Claim 2] The publish/subscribe apparatus [パブリッシュ/サブスクライブ装置] (processing module) of claim 1 , wherein the first filter includes circuitry configured to identify the requested message type subscribed to by the subscriber, thereby including the message type that identifies the message instances generated by the publisher .

JP2001285287A
CLAIM 10
[Claim 10] A publish/subscribe apparatus comprising : a publisher client [パブリッシャクライアント] (first server, second server) having a first filter and operative to generate a channel instance corresponding to a subscribed message type using the first filter ; a subscriber client having a second filter, subscribing to a message type and operative to receive messages contained in the corresponding channel instance, the second filter being operative to filter instances of the particular message type using attributes of the message type ; a communication path extending between the publisher client and the subscriber client ; and a publish/subscribe mechanism located in the communication path and operative so that the subscriber client receives the corresponding channel instance through the second filter .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module (スクライブ装置) is further configured to build a table of queue usage based on at least one observed datacenter queue request .
JP2001285287A
CLAIM 2
[Claim 2] The publish/subscribe apparatus [パブリッシュ/サブスクライブ装置] (processing module) of claim 1 , wherein the first filter includes circuitry configured to identify the requested message type subscribed to by the subscriber, thereby including the message type that identifies the message instances generated by the publisher .
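
Claim 9 has the VMM's processing module build a table of queue usage from observed datacenter queue requests, in effect a counter keyed by worker, queue, and operation. A sketch of that tallying step (the record field names are assumed for illustration):

```python
from collections import Counter

def build_usage_table(observed_requests):
    """Tally observed datacenter queue requests (claim 9) so that
    co-located producer/consumer pairs using the same queue can be
    identified for local handling."""
    table = Counter()
    for req in observed_requests:
        table[(req["worker"], req["queue"], req["op"])] += 1
    return table

observed = [
    {"worker": "producer-A", "queue": "jobs", "op": "send"},
    {"worker": "consumer-B", "queue": "jobs", "op": "receive"},
    {"worker": "producer-A", "queue": "jobs", "op": "send"},
]
table = build_usage_table(observed)
print(table[("producer-A", "jobs", "send")])  # 2
```

A queue for which both a send and a receive entry appear from workers on the same server is a candidate "matched queue" in the patent's sense.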

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module (スクライブ装置) is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
JP2001285287A
CLAIM 2
[Claim 2] The publish/subscribe apparatus [パブリッシュ/サブスクライブ装置] (processing module) of claim 1 , wherein the first filter includes circuitry configured to identify the requested message type subscribed to by the subscriber, thereby including the message type that identifies the message instances generated by the publisher .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20020120664A1

Filed: 2000-12-15     Issued: 2002-08-29

Scalable transaction processing pipeline

(Original Assignee) Aristos Logic Corp     (Current Assignee) Aristos Logic Corp

Robert Horn, Virgil Wilkins, Mark Myran, David Walls, Gnanashanmugam Elumalai, U'Tee Cheah
US8954993B2
CLAIM 1
. A method to locally process queue requests (logical block address) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
US20020120664A1
CLAIM 18
. The system of claim 1 wherein at least one of the processing elements maps data addresses to logical block addresses (queue requests) of a disk drive .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests (logical block address) from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module (integrated circuit) configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20020120664A1
CLAIM 18
. The system of claim 1 wherein at least one of the processing elements maps data addresses to logical block addresses (queue requests) of a disk drive .

US20020120664A1
CLAIM 21
. The system of claim 1 further wherein the processing elements , interconnect , and the data managers comprise a single integrated circuit (processing module) .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module (integrated circuit) is further configured to build a table of queue usage based on at least one observed datacenter queue request .
US20020120664A1
CLAIM 21
. The system of claim 1 further wherein the processing elements , interconnect , and the data managers comprise a single integrated circuit (processing module) .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module (integrated circuit) is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
US20020120664A1
CLAIM 21
. The system of claim 1 further wherein the processing elements , interconnect , and the data managers comprise a single integrated circuit (processing module) .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests (logical block address) from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter (queue management) location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
US20020120664A1
CLAIM 18
. The system of claim 1 wherein at least one of the processing elements maps data addresses to logical block addresses (queue requests) of a disk drive .

US20020120664A1
CLAIM 25
. The system of claim 1 wherein the tasks are selected from the group consisting of : RAID requests ;
queue management (second datacenter, second datacenter location) commands , cache data request , read data requests , write data requests , block level read requests , block level write requests , file level data read requests , file level data write requests , directory structure commands , and database manipulation commands .
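
Claim 14's controller intercepts the producer's message before it is stored at the queue's home (first) datacenter location and instead caches it at a second, different location, from which the consumer is later served. A sketch of that two-location flow, assuming simple dict-based stores (names are illustrative):

```python
def controller_intercept(message, first_location, second_location):
    """Intercept the first message before it reaches the first
    datacenter location and store it in a queue cache at a second,
    different location (claim 14)."""
    assert second_location is not first_location
    second_location.setdefault("queue_cache", []).append(message)
    return message

def controller_provide(request, second_location):
    """Provide the stored message to the consumer from the
    second-location cache, so it is served from within one server."""
    cache = second_location.get("queue_cache", [])
    return cache.pop(0) if cache else None

loc1, loc2 = {}, {}
controller_intercept("m1", loc1, loc2)
print(controller_provide({"queue": "jobs"}, loc2))  # "m1"; loc1 never stored it
```

Note that `loc1` (the queue's nominal home) never receives the message at all in this sketch, mirroring the "intercept ... before storing" limitation.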




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
KR20000031303A

Filed: 1998-11-05     Issued: 2000-06-05

Method for maintaining the confidentiality of Internet e-mail messages

(Original Assignee) 정선종; 한국전자통신연구원     

박윤경
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message (수신자가) to a datacenter queue at least partially stored at a second server (클라이언트) ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
KR20000031303A
CLAIM 1
A first step of composing an outgoing message that contains a message management grade, so that processing of the received message can be controlled according to the sender's security policy, and transmitting the message to a server ;
and a second step of determining and managing, according to the sender's request, whether the operation requested by the recipient [수신자가] (first message) of the message sent in the first step is to be performed ; a method for maintaining the confidentiality of a message at the sender's request in an Internet e-mail system .

KR20000031303A
CLAIM 2
The method of claim 1 , wherein the second step comprises : a first step in which the client [클라이언트] (second server) compares the kind of command indicated by the message recipient with the message management grade code designated by the sender when composing the message, to check whether the recipient's request has been prohibited by the sender ;
a second step of deleting the message when the sender has requested automatic deletion of the message, and, when the recipient has requested printing, copying, storing, or forwarding of the message, checking whether that command has been prohibited by the sender and performing the command only if it has not been prohibited ;
and a third step of generating a corresponding event when the command cannot be performed or the message has been deleted ; a method for maintaining message confidentiality at the sender's request in an Internet e-mail system .

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message (수신자가) sent by the producer worker before storing the first message .
KR20000031303A
CLAIM 1
A first step of composing an outgoing message that contains a message management grade, so that processing of the received message can be controlled according to the sender's security policy, and transmitting the message to a server ;
and a second step of determining and managing, according to the sender's request, whether the operation requested by the recipient [수신자가] (first message) of the message sent in the first step is to be performed ; a method for maintaining the confidentiality of a message at the sender's request in an Internet e-mail system .

US8954993B2
CLAIM 5
. The method of claim 1 , wherein modifying the stored first message (수신자가) includes deleting the first message .
KR20000031303A
CLAIM 1
A first step of composing an outgoing message that contains a message management grade, so that processing of the received message can be controlled according to the sender's security policy, and transmitting the message to a server ;
and a second step of determining and managing, according to the sender's request, whether the operation requested by the recipient [수신자가] (first message) of the message sent in the first step is to be performed ; a method for maintaining the confidentiality of a message at the sender's request in an Internet e-mail system .
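
Claims 5 and 21 narrow "modify the stored first message" to deleting it, driven by a signal on the command channel (for example, when the message has already been consumed elsewhere). A sketch of that invalidation step, with the signal's field names assumed for illustration:

```python
def apply_command_signal(queue_cache, signal):
    """Modify the locally stored message in response to a
    command-channel signal; here 'delete' (claims 5/21) removes the
    cached copy so a message consumed elsewhere is not served twice."""
    if signal.get("action") == "delete":
        queue_cache[:] = [m for m in queue_cache
                          if m["id"] != signal["message_id"]]
    return queue_cache

cache = [{"id": 1, "body": "m1"}, {"id": 2, "body": "m2"}]
apply_command_signal(cache, {"action": "delete", "message_id": 1})
print([m["id"] for m in cache])  # [2]
```

Signals with any other action leave the cache unchanged, matching the claim's conditional "in response to receiving the signal".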

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message (수신자가) to a datacenter queue at least partially stored at a second server (클라이언트) ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
KR20000031303A
CLAIM 1
A first step of composing an outgoing message that contains a message management grade, so that processing of the received message can be controlled according to the sender's security policy, and transmitting the message to a server ;
and a second step of determining and managing, according to the sender's request, whether the operation requested by the recipient [수신자가] (first message) of the message sent in the first step is to be performed ; a method for maintaining the confidentiality of a message at the sender's request in an Internet e-mail system .

KR20000031303A
CLAIM 2
The method of claim 1 , wherein the second step comprises : a first step in which the client [클라이언트] (second server) compares the kind of command indicated by the message recipient with the message management grade code designated by the sender when composing the message, to check whether the recipient's request has been prohibited by the sender ;
a second step of deleting the message when the sender has requested automatic deletion of the message, and, when the recipient has requested printing, copying, storing, or forwarding of the message, checking whether that command has been prohibited by the sender and performing the command only if it has not been prohibited ;
and a third step of generating a corresponding event when the command cannot be performed or the message has been deleted ; a method for maintaining message confidentiality at the sender's request in an Internet e-mail system .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message (수신자가) to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
KR20000031303A
CLAIM 1
A first step of composing an outgoing message that contains a message management grade, so that processing of the received message can be controlled according to the sender's security policy, and transmitting the message to a server ;
and a second step of determining and managing, according to the sender's request, whether the operation requested by the recipient [수신자가] (first message) of the message sent in the first step is to be performed ; a method for maintaining the confidentiality of a message at the sender's request in an Internet e-mail system .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message (수신자가) sent by the producer worker before storing the first message .
KR20000031303A
CLAIM 1
A first step of composing an outgoing message that contains a message management grade, so that processing of the received message can be controlled according to the sender's security policy, and transmitting the message to a server ;
and a second step of determining and managing, according to the sender's request, whether the operation requested by the recipient [수신자가] (first message) of the message sent in the first step is to be performed ; a method for maintaining the confidentiality of a message at the sender's request in an Internet e-mail system .

US8954993B2
CLAIM 21
. The datacenter of claim 14 , wherein the controller is further configured to modify the stored first message (수신자가) by deleting the first message .
KR20000031303A
CLAIM 1
A first step of composing an outgoing message that contains a message management grade, so that processing of the received message can be controlled according to the sender's security policy, and transmitting the message to a server ;
and a second step of determining and managing, according to the sender's request, whether the operation requested by the recipient [수신자가] (first message) of the message sent in the first step is to be performed ; a method for maintaining the confidentiality of a message at the sender's request in an Internet e-mail system .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102930427A

Filed: 2012-11-15     Issued: 2013-02-13

Schedule management method and mobile terminal thereof

(Original Assignee) Huaqin Telecom Technology Co Ltd     (Current Assignee) Huaqin Telecom Technology Co Ltd

潘世行
US8954993B2
CLAIM 1
. A method to locally process queue requests (请求信息) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request (包括访问) to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
CN102930427A
CLAIM 8
. The method of any one of claims 1 to 4 , characterized in that : the push information includes request information [请求信息] (queue requests) for accessing [访问] (message request) and setting schedule management .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request (包括访问) from the consumer worker to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
CN102930427A
CLAIM 8
. The method of any one of claims 1 to 4 , characterized in that : the push information includes request information for accessing [访问] (message request) and setting schedule management .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue is configured to hide a requested message upon receiving the message request (包括访问) from the consumer worker .
CN102930427A
CLAIM 8
. The method of any one of claims 1 to 4 , characterized in that : the push information includes request information for accessing [访问] (message request) and setting schedule management .
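
Claim 7's criterion turns on whether the datacenter queue hides a requested message when it is handed out, rather than deleting it outright (the visibility-window behavior of typical cloud queues). A sketch of that hiding semantics, so a local cache could mimic the remote queue; the class and parameter names are assumptions for illustration:

```python
import time

class HidingQueue:
    """Queue that hides a message on receive (claim 7's criterion)
    instead of deleting it, for a fixed visibility window."""

    def __init__(self, visibility=30.0):
        self.visibility = visibility
        self.messages = []          # entries: [visible_after_timestamp, body]

    def send(self, body):
        self.messages.append([0.0, body])

    def receive(self, now=None):
        """Return the first visible message and hide it for the window."""
        now = time.time() if now is None else now
        for entry in self.messages:
            if entry[0] <= now:               # message is visible
                entry[0] = now + self.visibility  # hide it on receive
                return entry[1]
        return None

q = HidingQueue(visibility=30.0)
q.send("m1")
print(q.receive(now=0.0))   # "m1" -- and now hidden until t=30
print(q.receive(now=1.0))   # None, the message is hidden
```

A hidden message becomes visible again once the window lapses, which is why the patent's criterion matters: a locally served request must still trigger the hide at the real queue.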

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests (请求信息) from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request (包括访问) to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102930427A
CLAIM 8
. The method of any one of claims 1 to 4 , characterized in that : the push information includes request information [请求信息] (queue requests) for accessing [访问] (message request) and setting schedule management .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request (包括访问) from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
CN102930427A
CLAIM 8
. The method of any one of claims 1 to 4 , characterized in that : the push information includes request information for accessing [访问] (message request) and setting schedule management .

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request (包括访问) from the consumer worker .
CN102930427A
CLAIM 8
. The method of any one of claims 1 to 4 , characterized in that : the push information includes request information for accessing [访问] (message request) and setting schedule management .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests (请求信息) from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request (包括访问) to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102930427A
CLAIM 8
. The method of any one of claims 1 to 4 , characterized in that : the push information includes request information [请求信息] (queue requests) for accessing [访问] (message request) and setting schedule management .

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request (包括访问) from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
CN102930427A
CLAIM 8
. The method of any one of claims 1 to 4 , characterized in that : the push information includes request information for accessing [访问] (message request) and setting schedule management .

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request (包括访问) from the consumer worker .
CN102930427A
CLAIM 8
. The method of any one of claims 1 to 4 , characterized in that : the push information includes request information for accessing [访问] (message request) and setting schedule management .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102891779A

Filed: 2012-09-27     Issued: 2013-01-23

Large-scale network performance measurement system and method for IP networks

(Original Assignee) BEIJING WRD TECHNOLOGY Co Ltd     (Current Assignee) BEIJING WRD TECHNOLOGY Co Ltd

徐立人, 丛群
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (结果进行) at a first server (传输协议) sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker (结果进行) at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks, characterized in that : the system consists of a server located in the core network and a plurality of network probes located in the measured network, arranged in a server/client architecture ; wherein : the server, composed of computers or servers with massive network-data processing capability, is used to generate measurement tasks, to communicate interactively with the network probes, to issue measurement tasks and obtain measurement results, and to aggregate and present the measurement results [结果进行] (producer worker, consumer worker) uploaded by the network probes ; it has four components : a user interface module, a task scheduling module, a communication module, and a database ; the network probes, composed of embedded devices or computers that have network measurement capability, can communicate interactively with the server, and form a distributed cluster, are used to receive and execute measurement tasks from the server and to report the measurement results to the server ; the network measurement performed and completed by a probe is mainly active measurement : the network state is judged by sending data into the network and observing transmission conditions, required time, and results ; each probe has three components : a communication module, a task scheduling module, and a test module .

CN102891779A
CLAIM 4
. The system according to claim 1 , characterized in that : the interactive communication between the network-probe interface of the server's communication module and the network probes uses the Hypertext Transfer Protocol [传输协议] (first server) HTTP, to guarantee quality of service (QoS) and to facilitate flexible firewall configuration : during measurement, the network probes periodically initiate Transmission Control Protocol (TCP) connections and connect to the server by domain name, so that the system can traverse the gateway's network address translation (NAT) and perform cross-network measurement to the greatest extent ; and because domain-name access is used, the server can implement load balancing at the Domain Name System (DNS) layer and the server's IP address can be switched and migrated ; the network probes use HTTP POST to place measurement tasks and measurement results in packet payloads in a compact data-encapsulation format and send them to the server ; the server then returns HTTP response packets to the network probes, to avoid out-of-memory conditions on the embedded network probes .

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message sent by the producer worker (结果进行) before storing the first message .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks, characterized in that : the system consists of a server located in the core network and a plurality of network probes located in the measured network, arranged in a server/client architecture ; wherein : the server, composed of computers or servers with massive network-data processing capability, is used to generate measurement tasks, to communicate interactively with the network probes, to issue measurement tasks and obtain measurement results, and to aggregate and present the measurement results [结果进行] (producer worker, consumer worker) uploaded by the network probes ; it has four components : a user interface module, a task scheduling module, a communication module, and a database ; the network probes, composed of embedded devices or computers that have network measurement capability, can communicate interactively with the server, and form a distributed cluster, are used to receive and execute measurement tasks from the server and to report the measurement results to the server ; the network measurement performed and completed by a probe is mainly active measurement : the network state is judged by sending data into the network and observing transmission conditions, required time, and results ; each probe has three components : a communication module, a task scheduling module, and a test module .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker (结果进行) and the consumer worker (结果进行) are co-located on a multi-core device at the first server (传输协议) .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks, characterized in that : the system consists of a server located in the core network and a plurality of network probes located in the measured network, arranged in a server/client architecture ; wherein : the server, composed of computers or servers with massive network-data processing capability, is used to generate measurement tasks, to communicate interactively with the network probes, to issue measurement tasks and obtain measurement results, and to aggregate and present the measurement results [结果进行] (producer worker, consumer worker) uploaded by the network probes ; it has four components : a user interface module, a task scheduling module, a communication module, and a database ; the network probes, composed of embedded devices or computers that have network measurement capability, can communicate interactively with the server, and form a distributed cluster, are used to receive and execute measurement tasks from the server and to report the measurement results to the server ; the network measurement performed and completed by a probe is mainly active measurement : the network state is judged by sending data into the network and observing transmission conditions, required time, and results ; each probe has three components : a communication module, a task scheduling module, and a test module .

CN102891779A
CLAIM 4
. The system according to claim 1 , characterized in that : the interactive communication between the network-probe interface of the server's communication module and the network probes uses the Hypertext Transfer Protocol [传输协议] (first server) HTTP, to guarantee quality of service (QoS) and to facilitate flexible firewall configuration : during measurement, the network probes periodically initiate Transmission Control Protocol (TCP) connections and connect to the server by domain name, so that the system can traverse the gateway's network address translation (NAT) and perform cross-network measurement to the greatest extent ; and because domain-name access is used, the server can implement load balancing at the Domain Name System (DNS) layer and the server's IP address can be switched and migrated ; the network probes use HTTP POST to place measurement tasks and measurement results in packet payloads in a compact data-encapsulation format and send them to the server ; the server then returns HTTP response packets to the network probes, to avoid out-of-memory conditions on the embedded network probes .

US8954993B2
CLAIM 4
. The method of claim 1 , wherein the producer worker (结果进行) and the consumer worker (结果进行) are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks, characterized in that : the system consists of a server located in the core network and a plurality of network probes located in the measured network, arranged in a server/client architecture ; wherein : the server, composed of computers or servers with massive network-data processing capability, is used to generate measurement tasks, to communicate interactively with the network probes, to issue measurement tasks and obtain measurement results, and to aggregate and present the measurement results [结果进行] (producer worker, consumer worker) uploaded by the network probes ; it has four components : a user interface module, a task scheduling module, a communication module, and a database ; the network probes, composed of embedded devices or computers that have network measurement capability, can communicate interactively with the server, and form a distributed cluster, are used to receive and execute measurement tasks from the server and to report the measurement results to the server ; the network measurement performed and completed by a probe is mainly active measurement : the network state is judged by sending data into the network and observing transmission conditions, required time, and results ; each probe has three components : a communication module, a task scheduling module, and a test module .

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker (结果进行) to the datacenter queue ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks, characterized in that : the system consists of a server located in the core network and a plurality of network probes located in the measured network, arranged in a server/client architecture ; wherein : the server, composed of computers or servers with massive network-data processing capability, is used to generate measurement tasks, to communicate interactively with the network probes, to issue measurement tasks and obtain measurement results, and to aggregate and present the measurement results [结果进行] (producer worker, consumer worker) uploaded by the network probes ; it has four components : a user interface module, a task scheduling module, a communication module, and a database ; the network probes, composed of embedded devices or computers that have network measurement capability, can communicate interactively with the server, and form a distributed cluster, are used to receive and execute measurement tasks from the server and to report the measurement results to the server ; the network measurement performed and completed by a probe is mainly active measurement : the network state is judged by sending data into the network and observing transmission conditions, required time, and results ; each probe has three components : a communication module, a task scheduling module, and a test module .

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue is configured to hide a requested message upon receiving the message request from the consumer worker (结果进行) .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks, characterized in that : the system consists of a server located in the core network and a plurality of network probes located in the measured network, arranged in a server/client architecture ; wherein : the server, composed of computers or servers with massive network-data processing capability, is used to generate measurement tasks, to communicate interactively with the network probes, to issue measurement tasks and obtain measurement results, and to aggregate and present the measurement results [结果进行] (producer worker, consumer worker) uploaded by the network probes ; it has four components : a user interface module, a task scheduling module, a communication module, and a database ; the network probes, composed of embedded devices or computers that have network measurement capability, can communicate interactively with the server, and form a distributed cluster, are used to receive and execute measurement tasks from the server and to report the measurement results to the server ; the network measurement performed and completed by a probe is mainly active measurement : the network state is judged by sending data into the network and observing transmission conditions, required time, and results ; each probe has three components : a communication module, a task scheduling module, and a test module .

US8954993B2
CLAIM 8
. A virtual machine manager (周期时间) (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker (结果进行) at a first server (传输协议) , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker (结果进行) at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks, characterized in that : the system consists of a server located in the core network and a plurality of network probes located in the measured network, arranged in a server/client architecture ; wherein : the server, composed of computers or servers with massive network-data processing capability, is used to generate measurement tasks, to communicate interactively with the network probes, to issue measurement tasks and obtain measurement results, and to aggregate and present the measurement results [结果进行] (producer worker, consumer worker) uploaded by the network probes ; it has four components : a user interface module, a task scheduling module, a communication module, and a database ; the network probes, composed of embedded devices or computers that have network measurement capability, can communicate interactively with the server, and form a distributed cluster, are used to receive and execute measurement tasks from the server and to report the measurement results to the server ; the network measurement performed and completed by a probe is mainly active measurement : the network state is judged by sending data into the network and observing transmission conditions, required time, and results ; each probe has three components : a communication module, a task scheduling module, and a test module .

CN102891779A
CLAIM 4
. The system according to claim 1 , characterized in that : the interactive communication between the network-probe interface of the server's communication module and the network probes uses the Hypertext Transfer Protocol [传输协议] (first server) HTTP, to guarantee quality of service (QoS) and to facilitate flexible firewall configuration : during measurement, the network probes periodically initiate Transmission Control Protocol (TCP) connections and connect to the server by domain name, so that the system can traverse the gateway's network address translation (NAT) and perform cross-network measurement to the greatest extent ; and because domain-name access is used, the server can implement load balancing at the Domain Name System (DNS) layer and the server's IP address can be switched and migrated ; the network probes use HTTP POST to place measurement tasks and measurement results in packet payloads in a compact data-encapsulation format and send them to the server ; the server then returns HTTP response packets to the network probes, to avoid out-of-memory conditions on the embedded network probes .

CN102891779A
CLAIM 8
. The measurement method according to claim 6 , characterized in that step 2 comprises the following operations : (21) after querying the DNS server to obtain the server's IP address, the network probe initiates a TCP connection to the server so that the two sides establish a connection ; (22) once the connection is established, the probe includes the previous measurement results in an HTTP packet and sends it to the server ; the measurement-result data are formatted with the data-interchange language JSON and placed in the HTTP POST payload ; (23) after receiving the probe's test results, the server takes the next cycle's measurement task from that probe's task queue in the database, formats it with JSON, places it in the HTTP return message, and sends it to the probe ; (24) after receiving the measurement-task message, the probe parses the data from the JSON format, stores it in its task queue, and ends this TCP connection ; (25) the probe performs active network measurement according to the received measurement task ; when the configured cycle time [周期时间] (second virtual machine, virtual machine manager) is reached, it returns to step (22), communicates interactively with the server, includes this cycle's measurement results in an HTTP packet, and sends it to the server ; then a new measurement cycle begins .

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker (结果进行) and the consumer worker (结果进行) are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks, characterized in that : the system consists of a server located in the core network and a plurality of network probes located in the measured network, arranged in a server/client architecture ; wherein : the server, composed of computers or servers with massive network-data processing capability, is used to generate measurement tasks, to communicate interactively with the network probes, to issue measurement tasks and obtain measurement results, and to aggregate and present the measurement results [结果进行] (producer worker, consumer worker) uploaded by the network probes ; it has four components : a user interface module, a task scheduling module, a communication module, and a database ; the network probes, composed of embedded devices or computers that have network measurement capability, can communicate interactively with the server, and form a distributed cluster, are used to receive and execute measurement tasks from the server and to report the measurement results to the server ; the network measurement performed and completed by a probe is mainly active measurement : the network state is judged by sending data into the network and observing transmission conditions, required time, and results ; each probe has three components : a communication module, a task scheduling module, and a test module .

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker (结果进行) to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks, characterized in that: the system is composed of a server located in the core network and multiple network probes located in the network under test, arranged in a server-client architecture; wherein: the server, composed of computers or servers with massive network-data processing capability, is used to generate measurement tasks, interact with the network probes, issue measurement tasks, obtain measurement results, and aggregate and present (producer worker, consumer worker) the measurement results uploaded by the network probes; it is provided with four components: a user interface module, a task scheduling module, a communication module, and a database; the network probes, composed of embedded devices or computers that have network measurement capability, can interact with the server, and form a distributed cluster, are used to receive and execute measurement tasks from the server and report the measurement results to the server; the network measurement performed and completed by the network probes is primarily active network measurement: the network state is judged by sending data into the network and observing the transmission conditions, the time required, and the results; each probe is provided with three components: a communication module, a task scheduling module, and a test module.
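The conditional-forwarding limitation of claims 12-13 above (intercept the consumer's request, forward it to the datacenter queue only when a first criterion is met, otherwise refrain) can be sketched as follows. All names here are illustrative assumptions; the criterion shown (whether the remote queue hides a requested message on receipt) is the example claim 13 itself gives.

```python
def route_request(request, criterion_met, forward, serve_locally):
    """Illustrative routing of an intercepted consumer message request:
    forward to the remote datacenter queue only if the criterion holds,
    otherwise refrain and answer from the local store."""
    if criterion_met(request):
        return forward(request)        # remote queue handles (and may hide) the message
    return serve_locally(request)      # refrain from forwarding

# Hypothetical criterion: the remote queue hides a requested message upon receipt.
hides_on_receipt = lambda req: req.get("queue_hides_messages", False)

forwarded = route_request({"queue_hides_messages": True},
                          hides_on_receipt,
                          forward=lambda r: "forwarded",
                          serve_locally=lambda r: "served-locally")
```

With the flag set, `forwarded` is `"forwarded"`; without it, the request is answered locally.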

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker (结果进行) .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks, characterized in that: the system is composed of a server located in the core network and multiple network probes located in the network under test, arranged in a server-client architecture; wherein: the server, composed of computers or servers with massive network-data processing capability, is used to generate measurement tasks, interact with the network probes, issue measurement tasks, obtain measurement results, and aggregate and present (producer worker, consumer worker) the measurement results uploaded by the network probes; it is provided with four components: a user interface module, a task scheduling module, a communication module, and a database; the network probes, composed of embedded devices or computers that have network measurement capability, can interact with the server, and form a distributed cluster, are used to receive and execute measurement tasks from the server and report the measurement results to the server; the network measurement performed and completed by the network probes is primarily active network measurement: the network state is judged by sending data into the network and observing the transmission conditions, the time required, and the results; each probe is provided with three components: a communication module, a task scheduling module, and a test module.

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (周期时间) (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker (结果进行) that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker (结果进行) that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks, characterized in that: the system is composed of a server located in the core network and multiple network probes located in the network under test, arranged in a server-client architecture; wherein: the server, composed of computers or servers with massive network-data processing capability, is used to generate measurement tasks, interact with the network probes, issue measurement tasks, obtain measurement results, and aggregate and present (producer worker, consumer worker) the measurement results uploaded by the network probes; it is provided with four components: a user interface module, a task scheduling module, a communication module, and a database; the network probes, composed of embedded devices or computers that have network measurement capability, can interact with the server, and form a distributed cluster, are used to receive and execute measurement tasks from the server and report the measurement results to the server; the network measurement performed and completed by the network probes is primarily active network measurement: the network state is judged by sending data into the network and observing the transmission conditions, the time required, and the results; each probe is provided with three components: a communication module, a task scheduling module, and a test module.

CN102891779A
CLAIM 8
. The measurement method according to claim 6, characterized in that step 2 comprises the following operations: (21) after querying the DNS server for the server's IP address, the network probe initiates a TCP connection to the server so that the two sides establish a connection; (22) once the connection is established, the network probe includes the previous measurement result in an HTTP packet and sends it to the server; the measurement-result data is formatted using the data-interchange language JSON and placed in the HTTP POST message payload; (23) upon receiving the probe's test result, the server takes the next cycle's measurement task from that probe's task queue in the database, formats it with JSON, places it in the HTTP response message, and sends it to the probe; (24) upon receiving the measurement-task message, the network probe parses the data from the JSON format, stores it in its task queue, and ends the current TCP connection; (25) the network probe performs active network measurement according to the received measurement task; when the configured cycle time (second virtual machine, virtual machine manager) is reached, it returns to step (22): the probe communicates with the server and sends this cycle's measurement result to the server in an HTTP packet; a new measurement cycle then begins.

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message sent by the producer worker (结果进行) before storing the first message .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks, characterized in that: the system is composed of a server located in the core network and multiple network probes located in the network under test, arranged in a server-client architecture; wherein: the server, composed of computers or servers with massive network-data processing capability, is used to generate measurement tasks, interact with the network probes, issue measurement tasks, obtain measurement results, and aggregate and present (producer worker, consumer worker) the measurement results uploaded by the network probes; it is provided with four components: a user interface module, a task scheduling module, a communication module, and a database; the network probes, composed of embedded devices or computers that have network measurement capability, can interact with the server, and form a distributed cluster, are used to receive and execute measurement tasks from the server and report the measurement results to the server; the network measurement performed and completed by the network probes is primarily active network measurement: the network state is judged by sending data into the network and observing the transmission conditions, the time required, and the results; each probe is provided with three components: a communication module, a task scheduling module, and a test module.

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker (结果进行) and the consumer worker (结果进行) are co-located on a multi-core device at the first datacenter location .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks, characterized in that: the system is composed of a server located in the core network and multiple network probes located in the network under test, arranged in a server-client architecture; wherein: the server, composed of computers or servers with massive network-data processing capability, is used to generate measurement tasks, interact with the network probes, issue measurement tasks, obtain measurement results, and aggregate and present (producer worker, consumer worker) the measurement results uploaded by the network probes; it is provided with four components: a user interface module, a task scheduling module, a communication module, and a database; the network probes, composed of embedded devices or computers that have network measurement capability, can interact with the server, and form a distributed cluster, are used to receive and execute measurement tasks from the server and report the measurement results to the server; the network measurement performed and completed by the network probes is primarily active network measurement: the network state is judged by sending data into the network and observing the transmission conditions, the time required, and the results; each probe is provided with three components: a communication module, a task scheduling module, and a test module.

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker (结果进行) to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks, characterized in that: the system is composed of a server located in the core network and multiple network probes located in the network under test, arranged in a server-client architecture; wherein: the server, composed of computers or servers with massive network-data processing capability, is used to generate measurement tasks, interact with the network probes, issue measurement tasks, obtain measurement results, and aggregate and present (producer worker, consumer worker) the measurement results uploaded by the network probes; it is provided with four components: a user interface module, a task scheduling module, a communication module, and a database; the network probes, composed of embedded devices or computers that have network measurement capability, can interact with the server, and form a distributed cluster, are used to receive and execute measurement tasks from the server and report the measurement results to the server; the network measurement performed and completed by the network probes is primarily active network measurement: the network state is judged by sending data into the network and observing the transmission conditions, the time required, and the results; each probe is provided with three components: a communication module, a task scheduling module, and a test module.

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue is configured to hide the requested message upon receiving the message request from the consumer worker (结果进行) .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks, characterized in that: the system is composed of a server located in the core network and multiple network probes located in the network under test, arranged in a server-client architecture; wherein: the server, composed of computers or servers with massive network-data processing capability, is used to generate measurement tasks, interact with the network probes, issue measurement tasks, obtain measurement results, and aggregate and present (producer worker, consumer worker) the measurement results uploaded by the network probes; it is provided with four components: a user interface module, a task scheduling module, a communication module, and a database; the network probes, composed of embedded devices or computers that have network measurement capability, can interact with the server, and form a distributed cluster, are used to receive and execute measurement tasks from the server and report the measurement results to the server; the network measurement performed and completed by the network probes is primarily active network measurement: the network state is judged by sending data into the network and observing the transmission conditions, the time required, and the results; each probe is provided with three components: a communication module, a task scheduling module, and a test module.




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102855148A

Filed: 2012-08-02     Issued: 2013-01-02

Android-based boot management method

(Original Assignee) Guangdong Oppo Mobile Telecommunications Corp Ltd     (Current Assignee) Guangdong Oppo Mobile Telecommunications Corp Ltd

胡展鸿, 龙振海
US8954993B2
CLAIM 1
. A method to locally process queue requests (对应活动) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue (对应活动) at least partially stored at a second server ;

storing the first message in a queue cache (对应活动) at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
CN102855148A
CLAIM 6
. The Android-based boot management method according to any one of claims 1-5, characterized in that the corresponding activity (queue requests, datacenter queue, queue cache) management service is ActivityManagerService, the corresponding package management service is PackageManagerService, and the corresponding system-initialization-complete broadcast is ACTION_BOOT_COMPLETED.
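The local queue-processing method of US8954993B2 claim 1 charted above (intercept a producer's message bound for the remote datacenter queue, store it in a local queue cache, serve it to a co-located consumer on request, and modify it when a command-channel signal arrives) can be sketched as follows. The class and method names are illustrative assumptions, not the patent's implementation.

```python
from collections import deque

class LocalQueueCache:
    """Illustrative local cache holding a partial copy of a remote datacenter queue."""

    def __init__(self):
        self.messages = deque()  # locally stored copies of intercepted messages

    def intercept_send(self, message):
        # The producer worker's message to the remote queue is stored locally.
        self.messages.append(dict(message))

    def serve_request(self):
        # The consumer worker's message request is answered from the local store.
        return self.messages.popleft() if self.messages else None

    def apply_signal(self, signal):
        # A command-channel signal modifies matching stored messages in place.
        for msg in self.messages:
            if msg["id"] == signal["id"]:
                msg.update(signal["changes"])

cache = LocalQueueCache()
cache.intercept_send({"id": 1, "body": "task-a"})              # producer worker
cache.apply_signal({"id": 1, "changes": {"body": "task-a2"}})  # command channel
print(cache.serve_request())                                   # consumer worker
```

The consumer thus receives the signal-modified message without the round trip to the remote queue, which is the efficiency the patent targets.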

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker to the datacenter queue (对应活动) ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
CN102855148A
CLAIM 6
. The Android-based boot management method according to any one of claims 1-5, characterized in that the corresponding activity (queue requests, datacenter queue, queue cache) management service is ActivityManagerService, the corresponding package management service is PackageManagerService, and the corresponding system-initialization-complete broadcast is ACTION_BOOT_COMPLETED.

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue (对应活动) is configured to hide a requested message upon receiving the message request from the consumer worker .
CN102855148A
CLAIM 6
. The Android-based boot management method according to any one of claims 1-5, characterized in that the corresponding activity (queue requests, datacenter queue, queue cache) management service is ActivityManagerService, the corresponding package management service is PackageManagerService, and the corresponding system-initialization-complete broadcast is ACTION_BOOT_COMPLETED.

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests (对应活动) from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue (对应活动) at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102855148A
CLAIM 6
. The Android-based boot management method according to any one of claims 1-5, characterized in that the corresponding activity (queue requests, datacenter queue, queue cache) management service is ActivityManagerService, the corresponding package management service is PackageManagerService, and the corresponding system-initialization-complete broadcast is ACTION_BOOT_COMPLETED.

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue (对应活动) request .
CN102855148A
CLAIM 6
. The Android-based boot management method according to any one of claims 1-5, characterized in that the corresponding activity (queue requests, datacenter queue, queue cache) management service is ActivityManagerService, the corresponding package management service is PackageManagerService, and the corresponding system-initialization-complete broadcast is ACTION_BOOT_COMPLETED.

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue (对应活动) request .
CN102855148A
CLAIM 6
. The Android-based boot management method according to any one of claims 1-5, characterized in that the corresponding activity (queue requests, datacenter queue, queue cache) management service is ActivityManagerService, the corresponding package management service is PackageManagerService, and the corresponding system-initialization-complete broadcast is ACTION_BOOT_COMPLETED.

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker to the datacenter queue (对应活动) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
CN102855148A
CLAIM 6
. The Android-based boot management method according to any one of claims 1-5, characterized in that the corresponding activity (queue requests, datacenter queue, queue cache) management service is ActivityManagerService, the corresponding package management service is PackageManagerService, and the corresponding system-initialization-complete broadcast is ACTION_BOOT_COMPLETED.

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue (对应活动) is configured to hide the requested message upon receiving the message request from the consumer worker .
CN102855148A
CLAIM 6
. The Android-based boot management method according to any one of claims 1-5, characterized in that the corresponding activity (queue requests, datacenter queue, queue cache) management service is ActivityManagerService, the corresponding package management service is PackageManagerService, and the corresponding system-initialization-complete broadcast is ACTION_BOOT_COMPLETED.

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests (对应活动) from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller (数据库) configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue (对应活动) at least partially stored at a first datacenter location (数据库) ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache (对应活动) at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102855148A
CLAIM 3
. The Android-based boot management method according to claim 1, characterized in that the configuration or definition in step 105) is saved in the system or in a user-defined database (datacenter controller, first datacenter location).

CN102855148A
CLAIM 6
. The Android-based boot management method according to any one of claims 1-5, characterized in that the corresponding activity (queue requests, datacenter queue, queue cache) management service is ActivityManagerService, the corresponding package management service is PackageManagerService, and the corresponding system-initialization-complete broadcast is ACTION_BOOT_COMPLETED.

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache (对应活动) includes one of a copy and a partial copy of the datacenter queue (对应活动) .
CN102855148A
CLAIM 6
. The Android-based boot management method according to any one of claims 1-5, characterized in that the corresponding activity (queue requests, datacenter queue, queue cache) management service is ActivityManagerService, the corresponding package management service is PackageManagerService, and the corresponding system-initialization-complete broadcast is ACTION_BOOT_COMPLETED.

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue (对应活动) request .
CN102855148A
CLAIM 6
. The Android-based boot management method according to any one of claims 1-5, characterized in that the corresponding activity (queue requests, datacenter queue, queue cache) management service is ActivityManagerService, the corresponding package management service is PackageManagerService, and the corresponding system-initialization-complete broadcast is ACTION_BOOT_COMPLETED.

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue (对应活动) request .
CN102855148A
CLAIM 6
. The Android-based boot management method according to any one of claims 1-5, characterized in that the corresponding activity (queue requests, datacenter queue, queue cache) management service is ActivityManagerService, the corresponding package management service is PackageManagerService, and the corresponding system-initialization-complete broadcast is ACTION_BOOT_COMPLETED.

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker and the consumer worker are co-located on a multi-core device at the first datacenter location (数据库) .
CN102855148A
CLAIM 3
. The Android-based boot management method according to claim 1, characterized in that the configuration or definition in step 105) is saved in the system or in a user-defined database (datacenter controller, first datacenter location).

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker to the datacenter queue (对应活动) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
CN102855148A
CLAIM 6
. The Android-based boot management method according to any one of claims 1-5, characterized in that the corresponding activity (queue requests, datacenter queue, queue cache) management service is ActivityManagerService, the corresponding package management service is PackageManagerService, and the corresponding system-initialization-complete broadcast is ACTION_BOOT_COMPLETED.

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue (对应活动) is configured to hide the requested message upon receiving the message request from the consumer worker .
CN102855148A
CLAIM 6
. The Android-based boot management method according to any one of claims 1-5, characterized in that the corresponding activity (queue requests, datacenter queue, queue cache) management service is ActivityManagerService, the corresponding package management service is PackageManagerService, and the corresponding system-initialization-complete broadcast is ACTION_BOOT_COMPLETED.




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102740228A

Filed: 2012-06-21     Issued: 2012-10-17

Location information sharing method, apparatus, and system

(Original Assignee) Beijing Xiaomi Technology Co Ltd     (Current Assignee) Beijing Xiaomi Technology Co Ltd

底浩, 石新明
US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location (标识的) ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter (在确定) location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102740228A
CLAIM 2
. The method according to claim 1, characterized in that determining that the second user's location information is allowed to be shared specifically comprises: sending the second user a sharing request carrying the first user's identifier (first datacenter location) and receiving feedback information returned by the second user allowing the sharing; or determining, according to preset sharing permissions of the first user and the second user, that the second user's location information is allowed to be shared with the first user.

CN102740228A
CLAIM 3
. The method according to claim 1, characterized in that, before determining (second datacenter, second datacenter location) that the second user's location information is allowed to be shared, the method further comprises: determining, according to the second user's identification information carried in the location-information sharing request, that the second user is already logged in; or determining, according to the second user's identification information carried in the location-information sharing request, that the second user is not logged in, and sending the second user a prompt message prompting the second user to log in.

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker and the consumer worker are co-located on a multi-core device at the first datacenter location (标识的) .
CN102740228A
CLAIM 2
. The method according to claim 1, characterized in that determining that the second user's location information is allowed to be shared specifically comprises: sending the second user a sharing request carrying the first user's identifier (first datacenter location) and receiving feedback information returned by the second user allowing the sharing; or determining, according to preset sharing permissions of the first user and the second user, that the second user's location information is allowed to be shared with the first user.




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102591721A

Filed: 2011-12-30     Issued: 2012-07-18

Method and system for allocating threads to execute tasks

(Original Assignee) Beijing Feinno Communication Technology Co Ltd     (Current Assignee) Beijing Feinno Communication Technology Co Ltd

李江林
US8954993B2
CLAIM 8
. A virtual machine manager (单位时) (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102591721A
CLAIM 8
. The method for allocating threads to execute tasks according to claim 7, characterized in that step 71 further comprises: step 81A, determining the number of idle threads needed in the thread pool according to the enqueue time of the last-ordered task in the task queue, the current time, the average number of tasks a thread processes per unit time (virtual machine manager), the number of tasks currently in the task queue, and a preset task-waiting-duration threshold; step 82A, adding the needed number of idle threads to the number of currently occupied threads to obtain a sum, and taking the larger of the initial thread count and that sum, the difference between the current number of threads in the thread pool and the larger value being the number of surplus threads; or, step 61 further comprises: step 81B, determining the number of idle threads needed in the thread pool according to the enqueue time of the last-ordered task in the task queue, the current time, the average number of tasks a thread processes per unit time, the number of tasks currently in the task queue, and a preset task-waiting-duration threshold; step 82B, adding the needed number of idle threads to the number of currently occupied threads to obtain a sum, and taking the smaller of the preset maximum thread count and that sum, the difference between the smaller value and the current number of threads in the thread pool being the number of threads to create.
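One plausible reading of steps 81A/81B and 82B above can be sketched numerically: size the pool so the last queued task starts before its wait-time threshold expires, then cap growth at the preset maximum. The formula below is an illustrative assumption consistent with the claimed inputs, not the patent's stated computation.

```python
import math

def idle_threads_needed(last_enqueue_time, now, tasks_per_thread_per_sec,
                        queue_len, max_wait_sec):
    """Illustrative step 81A/81B: enough idle threads that the queue drains
    within the last task's remaining allowed wait time."""
    remaining_wait = max_wait_sec - (now - last_enqueue_time)
    if remaining_wait <= 0:
        remaining_wait = 1e-9  # already past the threshold: demand maximum parallelism
    return math.ceil(queue_len / (tasks_per_thread_per_sec * remaining_wait))

def threads_to_create(needed_idle, occupied, pool_size, max_threads):
    """Step 82B: cap needed+occupied at the preset maximum, create the difference."""
    target = min(max_threads, needed_idle + occupied)
    return max(0, target - pool_size)

n = idle_threads_needed(last_enqueue_time=0, now=2, tasks_per_thread_per_sec=5,
                        queue_len=100, max_wait_sec=4)   # 100 / (5 * 2s) = 10 threads
print(threads_to_create(n, occupied=4, pool_size=8, max_threads=12))
```

Here the target pool of min(12, 10 + 4) = 12 threads against a current pool of 8 calls for creating 4 threads.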

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter (的数量) location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102591721A
CLAIM 6
. The method for allocating threads to execute tasks according to claim 5, characterized in that between step 52 and step 53 the method further comprises: step 61, determining the number of threads to create (first datacenter) according to the lookup result and the number of tasks currently in the task queue; step 62, if the number is not 0, creating threads according to that number and adding the created threads to the thread pool, the newly created threads' state being idle.

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker and the consumer worker are co-located on a multi-core device at the first datacenter (的数量) location .
CN102591721A
CLAIM 6
. The method for allocating threads to execute tasks according to claim 5, characterized in that between step 52 and step 53 the method further comprises: step 61, determining the number of threads to create (first datacenter) according to the lookup result and the number of tasks currently in the task queue; step 62, if the number is not 0, creating threads according to that number and adding the created threads to the thread pool, the newly created threads' state being idle.




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102572316A

Filed: 2011-09-30     Issued: 2012-07-11

Overflow control techniques for image signal processing

(Original Assignee) Apple Computer Inc     (Current Assignee) Apple Inc

G·科泰, J·E·弗雷德里克森
US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue request (个目的) .
CN102572316A
CLAIM 10
. An image signal processing system, comprising: an input queue buffer configured to receive incoming pixels corresponding to multiple frames of image data acquired by an image sensor, wherein incoming pixels received by the input queue buffer are sent to a target destination unit among multiple destination (datacenter queue request) units of the image signal processing system; an interrupt request (IRQ) status register configured to indicate the occurrence of an overflow condition of at least one of the multiple destination units; and control logic configured to control the reception of incoming pixels from the image sensor into the input queue buffer as follows: detecting the occurrence of an overflow based at least in part on the value of the IRQ register; when an overflow occurs, identifying the current frame being acquired by the digital image sensor; while the overflow is present, discarding incoming pixels acquired by the digital image sensor that correspond to the current image frame; detecting recovery from the overflow; and, if overflow recovery occurs before the end of the current image frame, receiving incoming pixels acquired by the digital image sensor after the overflow recovery that correspond to the remainder of the current image frame, sending the incoming pixels acquired after the overflow recovery to the destination unit, and, for each incoming pixel discarded while the overflow was present, sending a replacement pixel value to the target destination unit.
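The overflow handling recited in CN102572316A claim 10 above (drop pixels that arrive while overflow is active, then send replacement values for each dropped pixel so the frame stays complete) can be sketched as follows. The function name and the index-based overflow window are illustrative assumptions standing in for the IRQ-register-driven detection the claim describes.

```python
def process_frame(pixels, overflow_at, recover_at, replacement=0):
    """Illustrative overflow handling: pixels arriving while overflow is
    active are discarded and delivered as replacement values, so the
    destination unit still receives a full-length frame."""
    out = []
    for i, px in enumerate(pixels):
        if overflow_at <= i < recover_at:
            out.append(replacement)  # discarded pixel, substituted downstream
        else:
            out.append(px)           # normal delivery to the destination unit
    return out

print(process_frame([10, 11, 12, 13, 14], overflow_at=1, recover_at=3))
```

The delivered frame keeps its original length, which is the point of sending replacement values rather than simply dropping the lost pixels.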

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue request (个目的) .
CN102572316A
CLAIM 10
. An image signal processing system, comprising: an input queue buffer configured to receive incoming pixels corresponding to multiple frames of image data acquired by an image sensor, wherein incoming pixels received by the input queue buffer are sent to a target destination unit among multiple destination (datacenter queue request) units of the image signal processing system; an interrupt request (IRQ) status register configured to indicate the occurrence of an overflow condition of at least one of the multiple destination units; and control logic configured to control the reception of incoming pixels from the image sensor into the input queue buffer as follows: detecting the occurrence of an overflow based at least in part on the value of the IRQ register; when an overflow occurs, identifying the current frame being acquired by the digital image sensor; while the overflow is present, discarding incoming pixels acquired by the digital image sensor that correspond to the current image frame; detecting recovery from the overflow; and, if overflow recovery occurs before the end of the current image frame, receiving incoming pixels acquired by the digital image sensor after the overflow recovery that correspond to the remainder of the current image frame, sending the incoming pixels acquired after the overflow recovery to the destination unit, and, for each incoming pixel discarded while the overflow was present, sending a replacement pixel value to the target destination unit.

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter (单元施) location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102572316A
CLAIM 7
. The method according to claim 1, wherein determining whether an overflow condition exists comprises: determining whether back-pressure applied (second datacenter) by at least one downstream processing unit has propagated to the input buffer.

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue request (个目的) .
CN102572316A
CLAIM 10
. An image signal processing system, comprising: an input queue buffer configured to receive incoming pixels corresponding to multiple frames of image data acquired by an image sensor, wherein incoming pixels received by the input queue buffer are sent to a target destination unit among multiple destination (datacenter queue request) units of the image signal processing system; an interrupt request (IRQ) status register configured to indicate the occurrence of an overflow condition of at least one of the multiple destination units; and control logic configured to control the reception of incoming pixels from the image sensor into the input queue buffer as follows: detecting the occurrence of an overflow based at least in part on the value of the IRQ register; when an overflow occurs, identifying the current frame being acquired by the digital image sensor; while the overflow is present, discarding incoming pixels acquired by the digital image sensor that correspond to the current image frame; detecting recovery from the overflow; and, if overflow recovery occurs before the end of the current image frame, receiving incoming pixels acquired by the digital image sensor after the overflow recovery that correspond to the remainder of the current image frame, sending the incoming pixels acquired after the overflow recovery to the destination unit, and, for each incoming pixel discarded while the overflow was present, sending a replacement pixel value to the target destination unit.

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue request (个目的) .
CN102572316A
CLAIM 10
. An image signal processing system, comprising: an input queue buffer configured to receive incoming pixels corresponding to multiple frames of image data acquired by an image sensor, wherein incoming pixels received by the input queue buffer are sent to a target destination unit among multiple destination (datacenter queue request) units of the image signal processing system; an interrupt request (IRQ) status register configured to indicate the occurrence of an overflow condition of at least one of the multiple destination units; and control logic configured to control the reception of incoming pixels from the image sensor into the input queue buffer as follows: detecting the occurrence of an overflow based at least in part on the value of the IRQ register; when an overflow occurs, identifying the current frame being acquired by the digital image sensor; while the overflow is present, discarding incoming pixels acquired by the digital image sensor that correspond to the current image frame; detecting recovery from the overflow; and, if overflow recovery occurs before the end of the current image frame, receiving incoming pixels acquired by the digital image sensor after the overflow recovery that correspond to the remainder of the current image frame, sending the incoming pixels acquired after the overflow recovery to the destination unit, and, for each incoming pixel discarded while the overflow was present, sending a replacement pixel value to the target destination unit.




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102902669A

Filed: 2011-07-22     Issued: 2013-01-30

Distributed information crawling method based on an Internet system

(Original Assignee) TONGCHENG NETWORK TECHNOLOGY Co Ltd     (Current Assignee) TONGCHENG NETWORK TECHNOLOGY Co Ltd

吴志祥, 张海龙, 马和平, 王专, 吴剑, 郭凤林, 王晓钟, 庞绍进
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue (由中央) at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
CN102902669A
CLAIM 1
. A distributed information crawling method based on an Internet system, in which crawlers are distributed across two or more hosts and concurrently crawl information on the network according to a customized management mechanism; a central (datacenter queue) host controls the crawling direction of each crawling machine, and the data obtained by the crawling machines is then collated and aggregated into useful information or data and placed into an index library or a database; characterized in that: the crawlers run in the same local area network and communicate with one another over high-speed network connections; the crawlers access the external Internet and download web pages through the same network; all network load is concentrated on the egress of the local area network where the crawlers reside; and the crawlers' working modes include a master-slave mode, an autonomous mode, and a hybrid mode.

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker to the datacenter queue (由中央) ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
CN102902669A
CLAIM 1
. A distributed information crawling method based on an Internet system, in which crawlers are distributed across two or more hosts and concurrently crawl information on the network according to a customized management mechanism; a central (datacenter queue) host controls the crawling direction of each crawling machine, and the data obtained by the crawling machines is then collated and aggregated into useful information or data and placed into an index library or a database; characterized in that: the crawlers run in the same local area network and communicate with one another over high-speed network connections; the crawlers access the external Internet and download web pages through the same network; all network load is concentrated on the egress of the local area network where the crawlers reside; and the crawlers' working modes include a master-slave mode, an autonomous mode, and a hybrid mode.

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue (由中央) is configured to hide a requested message upon receiving the message request from the consumer worker .
CN102902669A
CLAIM 1
. A distributed information crawling method based on an Internet system, in which crawlers are distributed across two or more hosts and concurrently crawl information on the network according to a customized management mechanism; a central (datacenter queue) host controls the crawling direction of each crawling machine, and the data obtained by the crawling machines is then collated and aggregated into useful information or data and placed into an index library or a database; characterized in that: the crawlers run in the same local area network and communicate with one another over high-speed network connections; the crawlers access the external Internet and download web pages through the same network; all network load is concentrated on the egress of the local area network where the crawlers reside; and the crawlers' working modes include a master-slave mode, an autonomous mode, and a hybrid mode.

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue (由中央) at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102902669A
CLAIM 1
. A distributed information crawling method based on an Internet system, in which crawlers are distributed across two or more hosts and concurrently crawl information on the network according to a customized management mechanism; a central (datacenter queue) host controls the crawling direction of each crawling machine, and the data obtained by the crawling machines is then collated and aggregated into useful information or data and placed into an index library or a database; characterized in that: the crawlers run in the same local area network and communicate with one another over high-speed network connections; the crawlers access the external Internet and download web pages through the same network; all network load is concentrated on the egress of the local area network where the crawlers reside; and the crawlers' working modes include a master-slave mode, an autonomous mode, and a hybrid mode.

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue (由中央) request .
CN102902669A
CLAIM 1
. A distributed information crawling method based on an Internet system, in which crawlers are distributed across two or more hosts and concurrently crawl information on the network according to a customized management mechanism; a central (datacenter queue) host controls the crawling direction of each crawling machine, and the data obtained by the crawling machines is then collated and aggregated into useful information or data and placed into an index library or a database; characterized in that: the crawlers run in the same local area network and communicate with one another over high-speed network connections; the crawlers access the external Internet and download web pages through the same network; all network load is concentrated on the egress of the local area network where the crawlers reside; and the crawlers' working modes include a master-slave mode, an autonomous mode, and a hybrid mode.

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue (由中央) request .
CN102902669A
CLAIM 1
. A distributed information crawling method based on an Internet system, in which crawlers are distributed across two or more hosts and concurrently crawl information on the network according to a customized management mechanism; a central (datacenter queue) host controls the crawling direction of each crawling machine, and the data obtained by the crawling machines is then collated and aggregated into useful information or data and placed into an index library or a database; characterized in that: the crawlers run in the same local area network and communicate with one another over high-speed network connections; the crawlers access the external Internet and download web pages through the same network; all network load is concentrated on the egress of the local area network where the crawlers reside; and the crawlers' working modes include a master-slave mode, an autonomous mode, and a hybrid mode.

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module is further configured to : intercept the message request from the consumer worker to the datacenter queue (由中央) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
CN102902669A
CLAIM 1
. A distributed information crawling method based on an Internet system, in which crawlers are distributed across two or more hosts and concurrently crawl information on the network according to a customized management mechanism; a central (datacenter queue) host controls the crawling direction of each crawling machine, and the data obtained by the crawling machines is then collated and aggregated into useful information or data and placed into an index library or a database; characterized in that: the crawlers run in the same local area network and communicate with one another over high-speed network connections; the crawlers access the external Internet and download web pages through the same network; all network load is concentrated on the egress of the local area network where the crawlers reside; and the crawlers' working modes include a master-slave mode, an autonomous mode, and a hybrid mode.

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue (由中央) is configured to hide the requested message upon receiving the message request from the consumer worker .
CN102902669A
CLAIM 1
. A distributed information crawling method based on an Internet system, in which crawlers are distributed across two or more hosts and concurrently crawl information on the network according to a customized management mechanism; a central (datacenter queue) host controls the crawling direction of each crawling machine, and the data obtained by the crawling machines is then collated and aggregated into useful information or data and placed into an index library or a database; characterized in that: the crawlers run in the same local area network and communicate with one another over high-speed network connections; the crawlers access the external Internet and download web pages through the same network; all network load is concentrated on the egress of the local area network where the crawlers reside; and the crawlers' working modes include a master-slave mode, an autonomous mode, and a hybrid mode.

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller (数据库) configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue (由中央) at least partially stored at a first datacenter location (数据库) ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102902669A
CLAIM 1
. A distributed information crawling method based on an Internet system, in which crawlers are distributed over two or more hosts and, under a customized management mechanism, concurrently crawl information on the network; a central (由中央: datacenter queue) host controls the crawl direction of each crawling machine, and the data obtained by the crawling machines is then collated and aggregated into useful information or data and placed into an index repository or database (数据库: datacenter controller, first datacenter location); characterized in that: the crawlers run in the same local area network and communicate with one another over high-speed network connections; the crawlers access the external Internet and download web pages through the same network; all network load is concentrated at the egress of the local area network where the crawlers reside; and the crawler operating modes include a master-slave mode, an autonomous mode, and a hybrid mode.
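Claim 14 above recites a controller that intercepts a producer's message before it reaches the remote queue, stores it in a queue cache at a second location, serves it to a co-located consumer, and modifies the cached copy on a command-channel signal. A minimal sketch of that flow, with hypothetical names and no claim of being the patent's implementation:

```python
class DatacenterController:
    """Sketch of the claim-14 flow of US8954993B2 (illustrative only)."""

    def __init__(self):
        # Queue cache standing in for the "second datacenter location".
        self.queue_cache = {}  # message_id -> payload

    def on_producer_send(self, message_id, payload):
        # Intercept the first message before it is stored remotely.
        self.queue_cache[message_id] = payload

    def on_consumer_request(self, message_id):
        # Provide the locally stored message instead of a remote round-trip.
        return self.queue_cache.get(message_id)

    def on_command_signal(self, message_id, new_payload):
        # Modify the stored message in response to a command-channel
        # signal (e.g. the remote queue reports the message changed).
        if message_id in self.queue_cache:
            self.queue_cache[message_id] = new_payload
```

Producer detection and VM placement are omitted; the sketch shows only the intercept/store/provide/modify sequence the claim recites.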

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue (由中央) .
CN102902669A
CLAIM 1
. A distributed information crawling method based on an Internet system, in which crawlers are distributed over two or more hosts and, under a customized management mechanism, concurrently crawl information on the network; a central (由中央: datacenter queue) host controls the crawl direction of each crawling machine, and the data obtained by the crawling machines is then collated and aggregated into useful information or data and placed into an index repository or database; characterized in that: the crawlers run in the same local area network and communicate with one another over high-speed network connections; the crawlers access the external Internet and download web pages through the same network; all network load is concentrated at the egress of the local area network where the crawlers reside; and the crawler operating modes include a master-slave mode, an autonomous mode, and a hybrid mode.

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue (由中央) request .
CN102902669A
CLAIM 1
. A distributed information crawling method based on an Internet system, in which crawlers are distributed over two or more hosts and, under a customized management mechanism, concurrently crawl information on the network; a central (由中央: datacenter queue) host controls the crawl direction of each crawling machine, and the data obtained by the crawling machines is then collated and aggregated into useful information or data and placed into an index repository or database; characterized in that: the crawlers run in the same local area network and communicate with one another over high-speed network connections; the crawlers access the external Internet and download web pages through the same network; all network load is concentrated at the egress of the local area network where the crawlers reside; and the crawler operating modes include a master-slave mode, an autonomous mode, and a hybrid mode.

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue (由中央) request .
CN102902669A
CLAIM 1
. A distributed information crawling method based on an Internet system, in which crawlers are distributed over two or more hosts and, under a customized management mechanism, concurrently crawl information on the network; a central (由中央: datacenter queue) host controls the crawl direction of each crawling machine, and the data obtained by the crawling machines is then collated and aggregated into useful information or data and placed into an index repository or database; characterized in that: the crawlers run in the same local area network and communicate with one another over high-speed network connections; the crawlers access the external Internet and download web pages through the same network; all network load is concentrated at the egress of the local area network where the crawlers reside; and the crawler operating modes include a master-slave mode, an autonomous mode, and a hybrid mode.

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker and the consumer worker are co-located on a multi-core device at the first datacenter location (数据库) .
CN102902669A
CLAIM 1
. A distributed information crawling method based on an Internet system, in which crawlers are distributed over two or more hosts and, under a customized management mechanism, concurrently crawl information on the network; a central host controls the crawl direction of each crawling machine, and the data obtained by the crawling machines is then collated and aggregated into useful information or data and placed into an index repository or database (数据库: datacenter controller, first datacenter location); characterized in that: the crawlers run in the same local area network and communicate with one another over high-speed network connections; the crawlers access the external Internet and download web pages through the same network; all network load is concentrated at the egress of the local area network where the crawlers reside; and the crawler operating modes include a master-slave mode, an autonomous mode, and a hybrid mode.

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker to the datacenter queue (由中央) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
CN102902669A
CLAIM 1
. A distributed information crawling method based on an Internet system, in which crawlers are distributed over two or more hosts and, under a customized management mechanism, concurrently crawl information on the network; a central (由中央: datacenter queue) host controls the crawl direction of each crawling machine, and the data obtained by the crawling machines is then collated and aggregated into useful information or data and placed into an index repository or database; characterized in that: the crawlers run in the same local area network and communicate with one another over high-speed network connections; the crawlers access the external Internet and download web pages through the same network; all network load is concentrated at the egress of the local area network where the crawlers reside; and the crawler operating modes include a master-slave mode, an autonomous mode, and a hybrid mode.

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue (由中央) is configured to hide the requested message upon receiving the message request from the consumer worker .
CN102902669A
CLAIM 1
. A distributed information crawling method based on an Internet system, in which crawlers are distributed over two or more hosts and, under a customized management mechanism, concurrently crawl information on the network; a central (由中央: datacenter queue) host controls the crawl direction of each crawling machine, and the data obtained by the crawling machines is then collated and aggregated into useful information or data and placed into an index repository or database; characterized in that: the crawlers run in the same local area network and communicate with one another over high-speed network connections; the crawlers access the external Internet and download web pages through the same network; all network load is concentrated at the egress of the local area network where the crawlers reside; and the crawler operating modes include a master-slave mode, an autonomous mode, and a hybrid mode.




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102479108A

Filed: 2011-06-21     Issued: 2012-05-30

Embedded-system terminal resource management system and method for multiple application processes

(Original Assignee) Institute of Acoustics of CAS     (Current Assignee) Institute of Acoustics of CAS

Sun Peng, Wang Haiwei, Zhang Hui, Deng Feng, Lin Jun
US8954993B2
CLAIM 8
. A virtual machine manager (终端的图像) (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage (使用状态, 的使用) detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102479108A
CLAIM 7
. The embedded-system terminal resource management system for multiple application processes according to claim 1, characterized in that the terminal resource scheduling module further comprises: a terminal resource planning submodule which, when some terminal resource is overloaded or in conflict, performs contention scheduling and optimized allocation of that resource and generates a corresponding application-process scheduling list; a terminal resource monitoring submodule which collects the embedded system's terminal resource information at system boot, builds a state list of the terminal resources, monitors them in real time, and maintains the usage state (使用状态: queue usage) of the terminal resources; a terminal resource allocation submodule which provides running application processes with a control method for terminal resource access; and a terminal resource information maintenance submodule which maintains the state list of the terminal resources.

CN102479108A
CLAIM 9
. An embedded-system terminal resource management method for multiple application processes, which establishes dynamic priorities for application processes, based on application type and statistical patterns of user application usage, to implement contention scheduling of terminal resources, the method comprising: a step of establishing dynamic priorities of application processes, which, when multiple applications run concurrently, establishes and adjusts the dynamic priorities according to application type and statistical patterns of user usage; and a terminal resource scheduling step which, when an application process's priority changes, runs a resource contention scheduling procedure to re-schedule the terminal resources, preferentially guaranteeing reliable operation of high-priority applications; wherein the terminal resource scheduling step further comprises: running the resource contention scheduling procedure to re-schedule the terminal resources when many running applications overload a terminal resource or cause a conflict, or when a new application starts or an application exits the embedded system; and the terminal resources specifically comprise: the terminal's image (终端的图像: virtual machine manager) resources and non-image resources.

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage (使用状态, 的使用) based on at least one observed datacenter queue request .
CN102479108A
CLAIM 7
. The embedded-system terminal resource management system for multiple application processes according to claim 1, characterized in that the terminal resource scheduling module further comprises: a terminal resource planning submodule which, when some terminal resource is overloaded or in conflict, performs contention scheduling and optimized allocation of that resource and generates a corresponding application-process scheduling list; a terminal resource monitoring submodule which collects the embedded system's terminal resource information at system boot, builds a state list of the terminal resources, monitors them in real time, and maintains the usage state (使用状态: queue usage) of the terminal resources; a terminal resource allocation submodule which provides running application processes with a control method for terminal resource access; and a terminal resource information maintenance submodule which maintains the state list of the terminal resources.

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage (使用状态, 的使用) detector module is further configured to observe the at least one observed datacenter queue request .
CN102479108A
CLAIM 7
. The embedded-system terminal resource management system for multiple application processes according to claim 1, characterized in that the terminal resource scheduling module further comprises: a terminal resource planning submodule which, when some terminal resource is overloaded or in conflict, performs contention scheduling and optimized allocation of that resource and generates a corresponding application-process scheduling list; a terminal resource monitoring submodule which collects the embedded system's terminal resource information at system boot, builds a state list of the terminal resources, monitors them in real time, and maintains the usage state (使用状态: queue usage) of the terminal resources; a terminal resource allocation submodule which provides running application processes with a control method for terminal resource access; and a terminal resource information maintenance submodule which maintains the state list of the terminal resources.

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter (的数量, 最大数) location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102479108A
CLAIM 6
. The embedded-system terminal resource management system for multiple application processes according to claim 2, characterized in that the information in the application-process priority list includes: the application process ID, the priority, the maximum quantity (最大数: first datacenter) of each terminal resource the application requires, and the window position of the application process on the current screen.

CN102479108A
CLAIM 13
. The embedded-system terminal resource management method for multiple application processes according to claim 9, characterized in that the terminal resource scheduling step further comprises: (300) at system startup, initializing a terminal resource state list to record the state information of the system's terminal resources, including maximum quantity, used quantity, remaining available quantity, and a threshold; (301) at run time, when several application processes apply for terminal resource A simultaneously, requesting access to resource A on behalf of the application process with the highest priority according to the priority list; (302) checking the state of resource A: if, after allocating the requested quantity to the application process, the remaining available quantity would be greater than the threshold, go to (303); otherwise go to (305); (303) allocating resource A to the application process in the requested quantity (的数量: first datacenter), updating the state information of resource A, and adding the occupancy information of the application process; go to (304); (304) when the application process completes its resource access and requests release of resource A, reclaiming resource A, updating its state information, and deleting the occupancy information of the application process; go to (306); (305) on detecting a resource overload, performing resource planning and scheduling; (306) ending the resource access flow, re-scheduling resources for the next-priority application process according to the priority list, and returning to step (302).
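Steps (302)-(305) of the claim above describe a threshold check on the remaining resource quantity before allocation. A minimal sketch under that reading, with illustrative field names not taken from the patent:

```python
def allocate(resource, requested):
    """Sketch of steps (302)-(305) of CN102479108A claim 13: allocate the
    requested quantity only if the remaining available quantity stays above
    the threshold; otherwise report an overload (triggering re-planning)."""
    remaining = resource["max"] - resource["used"]
    if remaining - requested > resource["threshold"]:   # step (302)
        resource["used"] += requested                   # step (303)
        return "allocated"
    return "overload"                                   # step (305)
```

Release and re-scheduling (steps 304 and 306) are omitted for brevity.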

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage (使用状态, 的使用) based on at least one observed datacenter queue request .
CN102479108A
CLAIM 7
. The embedded-system terminal resource management system for multiple application processes according to claim 1, characterized in that the terminal resource scheduling module further comprises: a terminal resource planning submodule which, when some terminal resource is overloaded or in conflict, performs contention scheduling and optimized allocation of that resource and generates a corresponding application-process scheduling list; a terminal resource monitoring submodule which collects the embedded system's terminal resource information at system boot, builds a state list of the terminal resources, monitors them in real time, and maintains the usage state (使用状态: queue usage) of the terminal resources; a terminal resource allocation submodule which provides running application processes with a control method for terminal resource access; and a terminal resource information maintenance submodule which maintains the state list of the terminal resources.

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker and the consumer worker are co-located on a multi-core device at the first datacenter (的数量, 最大数) location .
CN102479108A
CLAIM 6
. The embedded-system terminal resource management system for multiple application processes according to claim 2, characterized in that the information in the application-process priority list includes: the application process ID, the priority, the maximum quantity (最大数: first datacenter) of each terminal resource the application requires, and the window position of the application process on the current screen.

CN102479108A
CLAIM 13
. The embedded-system terminal resource management method for multiple application processes according to claim 9, characterized in that the terminal resource scheduling step further comprises: (300) at system startup, initializing a terminal resource state list to record the state information of the system's terminal resources, including maximum quantity, used quantity, remaining available quantity, and a threshold; (301) at run time, when several application processes apply for terminal resource A simultaneously, requesting access to resource A on behalf of the application process with the highest priority according to the priority list; (302) checking the state of resource A: if, after allocating the requested quantity to the application process, the remaining available quantity would be greater than the threshold, go to (303); otherwise go to (305); (303) allocating resource A to the application process in the requested quantity (的数量: first datacenter), updating the state information of resource A, and adding the occupancy information of the application process; go to (304); (304) when the application process completes its resource access and requests release of resource A, reclaiming resource A, updating its state information, and deleting the occupancy information of the application process; go to (306); (305) on detecting a resource overload, performing resource planning and scheduling; (306) ending the resource access flow, re-scheduling resources for the next-priority application process according to the priority list, and returning to step (302).




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102741843A

Filed: 2011-03-22     Issued: 2012-10-17

Method and apparatus for reading data from a database

(Original Assignee) Qingdao Hisense Media Network Technology Co Ltd     (Current Assignee) Juhaokan Technology Co Ltd

Wang Zhen
US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module (获取模块) configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102741843A
CLAIM 11
. The apparatus according to claim 8, characterized in that the write unit comprises: a first acquisition module (获取模块: processing module) used, when an application updates data in the database, by the trigger corresponding to the updated data table to acquire the update information of the data table; a second acquisition module used to query the cache-node data table by the identifier of the data table and acquire the identifier of the cache node that caches the data table; and a write module used to write the update information acquired by the first acquisition module and the cache-node identifier acquired by the second acquisition module into the message queue corresponding to the data table.

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module (获取模块) is further configured to build a table of queue usage based on at least one observed datacenter queue request .
CN102741843A
CLAIM 11
. The apparatus according to claim 8, characterized in that the write unit comprises: a first acquisition module (获取模块: processing module) used, when an application updates data in the database, by the trigger corresponding to the updated data table to acquire the update information of the data table; a second acquisition module used to query the cache-node data table by the identifier of the data table and acquire the identifier of the cache node that caches the data table; and a write module used to write the update information acquired by the first acquisition module and the cache-node identifier acquired by the second acquisition module into the message queue corresponding to the data table.

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module (获取模块) is further configured to : intercept the message request from the consumer worker to the datacenter queue ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
CN102741843A
CLAIM 11
. The apparatus according to claim 8, characterized in that the write unit comprises: a first acquisition module (获取模块: processing module) used, when an application updates data in the database, by the trigger corresponding to the updated data table to acquire the update information of the data table; a second acquisition module used to query the cache-node data table by the identifier of the data table and acquire the identifier of the cache node that caches the data table; and a write module used to write the update information acquired by the first acquisition module and the cache-node identifier acquired by the second acquisition module into the message queue corresponding to the data table.

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location (标识的) ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102741843A
CLAIM 5
. The method according to claim 4, characterized in that the trigger corresponding to the updated data table writes the update information into the message queue corresponding to the data table by: determining the form in which the update information is written into the message queue; if the update information is written into the message queue in the form of updated-record identifiers (标识的: first datacenter location), the trigger corresponding to the updated data table reads the identifiers of the updated records of the data table and writes the update information, in identifier form, into the message queue corresponding to the data table; and if the update information is written into the message queue in the form of the updated data, the trigger reads the updated data of the data table and writes the update information, in data form, directly into the message queue corresponding to the data table.

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker and the consumer worker are co-located on a multi-core device at the first datacenter location (标识的) .
CN102741843A
CLAIM 5
. The method according to claim 4, characterized in that the trigger corresponding to the updated data table writes the update information into the message queue corresponding to the data table by: determining the form in which the update information is written into the message queue; if the update information is written into the message queue in the form of updated-record identifiers (标识的: first datacenter location), the trigger corresponding to the updated data table reads the identifiers of the updated records of the data table and writes the update information, in identifier form, into the message queue corresponding to the data table; and if the update information is written into the message queue in the form of the updated data, the trigger reads the updated data of the data table and writes the update information, in data form, directly into the message queue corresponding to the data table.




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102713847A

Filed: 2010-12-14     Issued: 2012-10-03

Hypervisor isolation of processor cores

(Original Assignee) Advanced Micro Devices Inc     (Current Assignee) Advanced Micro Devices Inc

Thomas R. Woller, Patrick Kaminski, Eric Bolin, Keith A. Lowery, Benjamin C. Serebrin
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue (一个队列) at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
CN102713847A
CLAIM 10
. The apparatus according to claim 7, further comprising: at least one queue (一个队列: first datacenter, datacenter queue) for communication between the application and the second core set, wherein the at least one queue includes: a command queue configured to communicate from the accelerated-computing driver to the supervisor; a response queue configured to communicate from the supervisor to the accelerated-computing driver; an error queue configured to communicate from the supervisor to the accelerated-computing driver; and a work queue configured to communicate from the accelerated-computing application programming interface to the supervisor.
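The claim above enumerates four directed queues between the driver/API side and the supervisor. A minimal sketch of that four-queue channel set, with names and directions taken from the claim and everything else illustrative:

```python
from collections import deque

# Four-queue channel of CN102713847A claim 10: command, response, error,
# and work queues between the application side and the supervisor
# (hypervisor). Directions follow the claim; helper names are ours.
channels = {
    "command":  deque(),  # accelerated-computing driver -> supervisor
    "response": deque(),  # supervisor -> accelerated-computing driver
    "error":    deque(),  # supervisor -> accelerated-computing driver
    "work":     deque(),  # accelerated-computing API -> supervisor
}

def send(channel, msg):
    """Enqueue a message on one of the four directed channels."""
    channels[channel].append(msg)

def receive(channel):
    """Dequeue the oldest message, or None if the channel is empty."""
    return channels[channel].popleft() if channels[channel] else None
```

A FIFO per direction is the simplest reading; the claim does not specify ordering or capacity.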

US8954993B2
CLAIM 6
. The method of claim 1 , further comprising : intercepting the message request from the consumer worker to the datacenter queue (一个队列) ;

forwarding the message request to the datacenter queue if a first criterion is met ;

and refraining from forwarding the message request to the datacenter queue if the first criterion is not met .
CN102713847A
CLAIM 10
. The apparatus according to claim 7, further comprising: at least one queue (一个队列: first datacenter, datacenter queue) for communication between the application and the second core set, wherein the at least one queue includes: a command queue configured to communicate from the accelerated-computing driver to the supervisor; a response queue configured to communicate from the supervisor to the accelerated-computing driver; an error queue configured to communicate from the supervisor to the accelerated-computing driver; and a work queue configured to communicate from the accelerated-computing application programming interface to the supervisor.

US8954993B2
CLAIM 7
. The method of claim 6 , wherein the first criterion includes whether the datacenter queue (一个队列) is configured to hide a requested message upon receiving the message request from the consumer worker .
CN102713847A
CLAIM 10
. The apparatus according to claim 7, further comprising: at least one queue (一个队列: first datacenter, datacenter queue) for communication between the application and the second core set, wherein the at least one queue includes: a command queue configured to communicate from the accelerated-computing driver to the supervisor; a response queue configured to communicate from the supervisor to the accelerated-computing driver; an error queue configured to communicate from the supervisor to the accelerated-computing driver; and a work queue configured to communicate from the accelerated-computing application programming interface to the supervisor.

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue (一个队列) at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module (少一个计算) configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102713847A
CLAIM 10
. The apparatus according to claim 7, further comprising: at least one queue (一个队列: first datacenter, datacenter queue) for communication between the application and the second core set, wherein the at least one queue includes: a command queue configured to communicate from the accelerated-computing driver to the supervisor; a response queue configured to communicate from the supervisor to the accelerated-computing driver; an error queue configured to communicate from the supervisor to the accelerated-computing driver; and a work queue configured to communicate from the accelerated-computing application programming interface to the supervisor.

CN102713847A
CLAIM 13
. A computer program product encoded in at least one computer (少一个计算: processing module)-readable medium, the computer program product comprising: one or more functional sequences executable as, or in combination with, a virtual machine monitor and configured to execute, under control of the virtual machine monitor, an operating-system sequence as a guest on a first core set comprising one or more of the plurality of cores, and to execute at least some work of an application on a second core set comprising one or more of the plurality of cores, wherein the second core set is not visible to the operating system.

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module (少一个计算) is further configured to build a table of queue usage based on at least one observed datacenter queue (一个队列) request .
CN102713847A
CLAIM 10
. The apparatus according to claim 7, further comprising: at least one queue (一个队列: first datacenter, datacenter queue) for communication between the application and the second core set, wherein the at least one queue includes: a command queue configured to communicate from the accelerated-computing driver to the supervisor; a response queue configured to communicate from the supervisor to the accelerated-computing driver; an error queue configured to communicate from the supervisor to the accelerated-computing driver; and a work queue configured to communicate from the accelerated-computing application programming interface to the supervisor.

CN102713847A
CLAIM 13
. A computer program product encoded in at least one computer (少一个计算: processing module)-readable medium, the computer program product comprising: one or more functional sequences executable as, or in combination with, a virtual machine monitor and configured to execute, under control of the virtual machine monitor, an operating-system sequence as a guest on a first core set comprising one or more of the plurality of cores, and to execute at least some work of an application on a second core set comprising one or more of the plurality of cores, wherein the second core set is not visible to the operating system.

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue (一个队列) request .
CN102713847A
CLAIM 10
. The apparatus according to claim 7, further comprising: at least one queue (一个队列: first datacenter, datacenter queue) for communication between the application and the second core set, wherein the at least one queue includes: a command queue configured to communicate from the accelerated-computing driver to the supervisor; a response queue configured to communicate from the supervisor to the accelerated-computing driver; an error queue configured to communicate from the supervisor to the accelerated-computing driver; and a work queue configured to communicate from the accelerated-computing application programming interface to the supervisor.

US8954993B2
CLAIM 12
. The VMM of claim 8 , wherein the processing module (少一个计算) is further configured to : intercept the message request from the consumer worker to the datacenter queue (一个队列) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
CN102713847A
CLAIM 10
. The apparatus according to claim 7, further comprising: at least one queue (一个队列: first datacenter, datacenter queue) for communication between the application and the second core set, wherein the at least one queue includes: a command queue configured to communicate from the accelerated-computing driver to the supervisor; a response queue configured to communicate from the supervisor to the accelerated-computing driver; an error queue configured to communicate from the supervisor to the accelerated-computing driver; and a work queue configured to communicate from the accelerated-computing application programming interface to the supervisor.

CN102713847A
CLAIM 13
. A computer program product encoded in at least one computer (少一个计算: processing module)-readable medium, the computer program product comprising: one or more functional sequences executable as, or in combination with, a virtual machine monitor and configured to execute, under control of the virtual machine monitor, an operating-system sequence as a guest on a first core set comprising one or more of the plurality of cores, and to execute at least some work of an application on a second core set comprising one or more of the plurality of cores, wherein the second core set is not visible to the operating system.

US8954993B2
CLAIM 13
. The VMM of claim 12 , wherein the first criterion includes whether the datacenter queue (一个队列) is configured to hide the requested message upon receiving the message request from the consumer worker .
CN102713847A
CLAIM 10
. The apparatus according to claim 7, further comprising: at least one queue (一个队列: first datacenter, datacenter queue) for communication between the application and the second core set, wherein the at least one queue includes: a command queue configured to communicate from the accelerated-computing driver to the supervisor; a response queue configured to communicate from the supervisor to the accelerated-computing driver; an error queue configured to communicate from the supervisor to the accelerated-computing driver; and a work queue configured to communicate from the accelerated-computing application programming interface to the supervisor.

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller (加速器) configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue (一个队列) at least partially stored at a first datacenter (一个队列) location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN102713847A
CLAIM 4
. The method according to claim 1, wherein the application executes on the operating system and the cores in the second core subset are configured as compute accelerators (加速器: datacenter controller), and wherein the application accesses the second core subset indirectly via the virtual machine monitor.

CN102713847A
CLAIM 10
. The apparatus according to claim 7, further comprising: at least one queue (一个队列: first datacenter, datacenter queue) for communication between the application and the second core set, wherein the at least one queue includes: a command queue configured to communicate from the accelerated-computing driver to the supervisor; a response queue configured to communicate from the supervisor to the accelerated-computing driver; an error queue configured to communicate from the supervisor to the accelerated-computing driver; and a work queue configured to communicate from the accelerated-computing application programming interface to the supervisor.

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue (一个队列) .
CN102713847A
CLAIM 10
. The apparatus according to claim 7, further comprising: at least one queue (一个队列: first datacenter, datacenter queue) for communication between the application and the second core set, wherein the at least one queue includes: a command queue configured to communicate from the accelerated-computing driver to the supervisor; a response queue configured to communicate from the supervisor to the accelerated-computing driver; an error queue configured to communicate from the supervisor to the accelerated-computing driver; and a work queue configured to communicate from the accelerated-computing application programming interface to the supervisor.

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue (一个队列) request .
CN102713847A
CLAIM 10
. The apparatus according to claim 7, further comprising: at least one queue (一个队列: first datacenter, datacenter queue) for communication between the application and the second core set, wherein the at least one queue includes: a command queue configured to communicate from the accelerated-computing driver to the supervisor; a response queue configured to communicate from the supervisor to the accelerated-computing driver; an error queue configured to communicate from the supervisor to the accelerated-computing driver; and a work queue configured to communicate from the accelerated-computing application programming interface to the supervisor.

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue (一个队列) request .
CN102713847A
CLAIM 10
. The apparatus according to claim 7, further comprising: at least one queue (一个队列: first datacenter, datacenter queue) for communication between the application and the second core set, wherein the at least one queue includes: a command queue configured to communicate from the accelerated-computing driver to the supervisor; a response queue configured to communicate from the supervisor to the accelerated-computing driver; an error queue configured to communicate from the supervisor to the accelerated-computing driver; and a work queue configured to communicate from the accelerated-computing application programming interface to the supervisor.

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker and the consumer worker are co-located on a multi-core device at the first datacenter (一个队列) location .
CN102713847A
CLAIM 10
. The apparatus according to claim 7, further comprising: at least one queue (一个队列: first datacenter, datacenter queue) for communication between the application and the second core set, wherein the at least one queue includes: a command queue configured to communicate from the accelerated-computing driver to the supervisor; a response queue configured to communicate from the supervisor to the accelerated-computing driver; an error queue configured to communicate from the supervisor to the accelerated-computing driver; and a work queue configured to communicate from the accelerated-computing application programming interface to the supervisor.

US8954993B2
CLAIM 22
. The datacenter of claim 14 , wherein the controller is further configured to : intercept the message request from the consumer worker to the datacenter queue (一个队列) ;

forward the message request to the datacenter queue if a first criterion is met ;

and refrain from forwarding the message request to the datacenter queue if the first criterion is not met .
CN102713847A
CLAIM 10
. The apparatus according to claim 7, further comprising: at least one queue (一个队列: first datacenter, datacenter queue) for communication between the application and the second core set, wherein the at least one queue includes: a command queue configured to communicate from the accelerated-computing driver to the supervisor; a response queue configured to communicate from the supervisor to the accelerated-computing driver; an error queue configured to communicate from the supervisor to the accelerated-computing driver; and a work queue configured to communicate from the accelerated-computing application programming interface to the supervisor.

US8954993B2
CLAIM 23
. The datacenter of claim 22 , wherein the first criterion includes whether the datacenter queue (一个队列) is configured to hide the requested message upon receiving the message request from the consumer worker .
CN102713847A
CLAIM 10
. The apparatus according to claim 7, further comprising: at least one queue (一个队列: first datacenter, datacenter queue) for communication between the application and the second core set, wherein the at least one queue includes: a command queue configured to communicate from the accelerated-computing driver to the supervisor; a response queue configured to communicate from the supervisor to the accelerated-computing driver; an error queue configured to communicate from the supervisor to the accelerated-computing driver; and a work queue configured to communicate from the accelerated-computing application programming interface to the supervisor.




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
KR20120111734A

Filed: 2010-12-14     Issued: 2012-10-10

Hypervisor isolation of processor cores

(Original Assignee) Advanced Micro Devices, Inc.

Keith A. Lowery, Eric Bolin, Benjamin C. Serebrin, Thomas R. Woller, Patrick Kaminski
US8954993B2
CLAIM 8
. A virtual machine (virtual machine) manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
KR20120111734A
CLAIM 1
Executing an operating system on a first core subset comprising one or more cores among a plurality of cores of a computer system, wherein the operating system is executed as a guest under the control of a virtual machine (virtual machine) monitor;
and executing work for an application on a second core subset comprising one or more cores among the plurality of cores, wherein the first core subset and the second core subset are mutually exclusive, and the second core subset is not visible to the operating system.

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (virtual machine) (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
KR20120111734A
CLAIM 1
Executing an operating system on a first core subset comprising one or more cores among a plurality of cores of a computer system, wherein the operating system is executed as a guest under the control of a virtual machine (virtual machine) monitor;
and executing work for an application on a second core subset comprising one or more cores among the plurality of cores, wherein the first core subset and the second core subset are mutually exclusive, and the second core subset is not visible to the operating system.

US8954993B2
CLAIM 20
. The datacenter of claim 14 , wherein the first and second VMs (hypervisor) are configured to execute on the same physical machine .
KR20120111734A
CLAIM 7
A plurality of cores;
operating system software encoded on one or more media accessible by the plurality of cores;
and hypervisor (second VMs) software encoded on one or more media accessible by the plurality of cores and executable on one or more of the plurality of cores, wherein the hypervisor software is executable to control execution of the operating system software as a guest on a first core set comprising one or more of the plurality of cores, and to execute at least some work of an application on a second core set comprising one or more of the plurality of cores, the second core set not being visible to the operating system.




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN101923491A

Filed: 2010-08-11     Issued: 2010-12-22

Method for thread-group address-space scheduling and thread switching in a multi-core environment

(Original Assignee) Shanghai Jiaotong University     (Current Assignee) Shanghai Jiaotong University

Guo Minyi, Li Yang, Wang Wenyin, Ding Mengwei, Yang Lanqi, Wu Qian, Shen Yao
US8954993B2
CLAIM 1
. A method to locally process queue requests (包含的) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache (当线程) at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
CN101923491A
CLAIM 1
A method for thread-group address-space scheduling and thread switching in a multi-core environment, characterized by comprising the following steps: step 1: partition the threads contained in (所包含的: queue requests) each process into thread groups, obtaining a number of thread groups; step 2: allocate the thread groups, obtaining the CPU core for each thread group, and place each thread group into the corresponding local queue; step 3: run each CPU core; when a thread (当线程: queue cache) is dynamically created or deleted, perform maintenance on the thread groups to obtain the updated thread groups; otherwise go to step 4; step 4: when the current thread's time slice is used up, schedule and switch threads and return to step 3; otherwise, when the current thread blocks, the ready queue is empty, and the load is unbalanced, perform thread migration, then schedule and switch threads and return to step 3; when the current thread blocks and the ready queue is not empty or the load is balanced, directly schedule and switch threads and return to step 3; when the current thread is not blocked but the machine is halted, thread scheduling ends; when the current thread is neither blocked nor halted, return to step 3; when a blocked thread returns to the ready state, find the nearest preceding ready thread u in the structure queue and insert the thread before thread u.
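Steps 1 and 2 of the claim above partition each process's threads into groups and place each group into a per-core local queue. A minimal sketch of that partition-and-assign stage, assuming fixed-size groups and round-robin core assignment (the chart text specifies neither policy):

```python
def assign_thread_groups(processes, num_cores, group_size=2):
    """Sketch of steps 1-2 of CN101923491A claim 1 (illustrative policy).

    processes: mapping of process name -> list of thread ids.
    Returns one local queue (list of thread groups) per CPU core.
    """
    local_queues = [[] for _ in range(num_cores)]
    core = 0
    for threads in processes.values():
        # Step 1: split the process's threads into fixed-size groups.
        groups = [threads[i:i + group_size]
                  for i in range(0, len(threads), group_size)]
        for group in groups:
            # Step 2: place each group into a core's local queue.
            local_queues[core % num_cores].append(group)
            core += 1
    return local_queues
```

Steps 3-4 (maintenance, time-slice switching, migration) are omitted; the sketch covers only the group-to-local-queue mapping the chart relies on.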

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests (contained) from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN101923491A
CLAIM 1
A method for scheduling thread-group address spaces and switching threads in a multi-core environment, characterized by comprising the following steps: Step 1, partitioning the threads contained (queue requests) in each process into thread groups, obtaining a number of thread groups; Step 2, allocating the thread groups, obtaining a CPU core for each thread group, and feeding each thread group into its corresponding local queue; Step 3, running each CPU core; when a thread is dynamically created or deleted, performing maintenance on the thread groups, obtaining the updated thread groups; otherwise, executing Step 4; Step 4, when the current thread's time slice is exhausted, scheduling and switching threads, returning to Step 3; otherwise, when the current thread blocks, the ready queue is empty, and the load is unbalanced, performing thread migration, then scheduling and switching threads, returning to Step 3; when the current thread blocks and the ready queue is not empty or the load is balanced, directly scheduling and switching threads, returning to Step 3; when the current thread is not blocked but the system halts, thread scheduling ends; when the current thread is neither blocked nor halted, returning to Step 3; when a blocked thread returns to the ready state, finding the nearest ready thread u ahead of it in the structure queue, and inserting the thread before thread u.

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests (contained) from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache (when a thread) at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
CN101923491A
CLAIM 1
A method for scheduling thread-group address spaces and switching threads in a multi-core environment, characterized by comprising the following steps: Step 1, partitioning the threads contained (queue requests) in each process into thread groups, obtaining a number of thread groups; Step 2, allocating the thread groups, obtaining a CPU core for each thread group, and feeding each thread group into its corresponding local queue; Step 3, running each CPU core; when a thread (queue cache) is dynamically created or deleted, performing maintenance on the thread groups, obtaining the updated thread groups; otherwise, executing Step 4; Step 4, when the current thread's time slice is exhausted, scheduling and switching threads, returning to Step 3; otherwise, when the current thread blocks, the ready queue is empty, and the load is unbalanced, performing thread migration, then scheduling and switching threads, returning to Step 3; when the current thread blocks and the ready queue is not empty or the load is balanced, directly scheduling and switching threads, returning to Step 3; when the current thread is not blocked but the system halts, thread scheduling ends; when the current thread is neither blocked nor halted, returning to Step 3; when a blocked thread returns to the ready state, finding the nearest ready thread u ahead of it in the structure queue, and inserting the thread before thread u.

US8954993B2
CLAIM 16
. The datacenter of claim 14 , wherein the queue cache (when a thread) includes one of a copy and a partial copy of the datacenter queue .
CN101923491A
CLAIM 1
A method for scheduling thread-group address spaces and switching threads in a multi-core environment, characterized by comprising the following steps: Step 1, partitioning the threads contained in each process into thread groups, obtaining a number of thread groups; Step 2, allocating the thread groups, obtaining a CPU core for each thread group, and feeding each thread group into its corresponding local queue; Step 3, running each CPU core; when a thread (queue cache) is dynamically created or deleted, performing maintenance on the thread groups, obtaining the updated thread groups; otherwise, executing Step 4; Step 4, when the current thread's time slice is exhausted, scheduling and switching threads, returning to Step 3; otherwise, when the current thread blocks, the ready queue is empty, and the load is unbalanced, performing thread migration, then scheduling and switching threads, returning to Step 3; when the current thread blocks and the ready queue is not empty or the load is balanced, directly scheduling and switching threads, returning to Step 3; when the current thread is not blocked but the system halts, thread scheduling ends; when the current thread is neither blocked nor halted, returning to Step 3; when a blocked thread returns to the ready state, finding the nearest ready thread u ahead of it in the structure queue, and inserting the thread before thread u.
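The first two steps of CN101923491A claim 1 (partition each process's threads into thread groups, then assign each group to a CPU core's local queue) can be sketched as below. The one-group-per-process simplification and the round-robin core assignment are assumptions made for illustration; the reference itself leaves the grouping and allocation policies more open:

```python
from collections import defaultdict, deque

def partition_into_groups(processes):
    # Step 1 (simplified): the threads of each process form one thread group
    return [list(threads) for threads in processes.values()]

def assign_to_local_queues(groups, num_cores):
    # Step 2 (assumed policy): round-robin the groups over the CPU cores,
    # each core holding its own local queue of assigned thread groups
    queues = defaultdict(deque)
    for i, group in enumerate(groups):
        queues[i % num_cores].append(group)
    return queues

groups = partition_into_groups({"p1": ["t1", "t2"], "p2": ["t3"]})
queues = assign_to_local_queues(groups, num_cores=2)
```

Note that these per-core local queues hold thread groups for scheduling; the mapping to the patent's "queue cache" (a local copy of message-queue contents) rests on the annotated phrase alone.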




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
WO2009032493A1

Filed: 2008-08-13     Issued: 2009-03-12

Dynamic market data filtering

(Original Assignee) Chicago Mercantile Exchange, Inc.     

Paul J. Callaway, Dennis M. Genetski, Adrien Gracia, Vijay Menon, James Krause
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (data message) at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
WO2009032493A1
CLAIM 8
. A method for dynamically filtering streaming data comprising : receiving a stream of consecutive data messages (producer worker) ;
buffering data messages in a data queue when the data streaming rate received at the data receiver exceeds a publishing limitation ;
filtering inclusive messages in the data queue ;
and distributing streaming data including replaced and aggregated messages .
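The inclusive-message filtering step recited in WO2009032493A1 claim 8 can be illustrated roughly as follows: when buffered messages back up behind a publishing limitation, a newer inclusive message for the same instrument replaces the older one before distribution. Keying messages by a "symbol" field is an assumption for the sketch, not something the claim specifies:

```python
from collections import OrderedDict

def filter_inclusive(buffered):
    # Later messages for the same key replace earlier ones; the first-seen
    # ordering of keys is preserved for distribution.
    latest = OrderedDict()
    for msg in buffered:
        latest[msg["symbol"]] = msg
    return list(latest.values())

stream = [
    {"symbol": "ES", "price": 100},
    {"symbol": "NQ", "price": 200},
    {"symbol": "ES", "price": 101},  # replaces the earlier ES message
]
published = filter_inclusive(stream)
```

This replacement-before-distribution behavior is a different mechanism from the patent's queue cache, which preserves individual messages for a consumer; the overlap asserted by the chart is at the level of buffering messages in a queue.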

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message sent by the producer worker (data message) before storing the first message .
WO2009032493A1
CLAIM 8
. A method for dynamically filtering streaming data comprising : receiving a stream of consecutive data messages (producer worker) ;
buffering data messages in a data queue when the data streaming rate received at the data receiver exceeds a publishing limitation ;
filtering inclusive messages in the data queue ;
and distributing streaming data including replaced and aggregated messages .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker (data message) and the consumer worker are co-located on a multi-core device at the first server .
WO2009032493A1
CLAIM 8
. A method for dynamically filtering streaming data comprising : receiving a stream of consecutive data messages (producer worker) ;
buffering data messages in a data queue when the data streaming rate received at the data receiver exceeds a publishing limitation ;
filtering inclusive messages in the data queue ;
and distributing streaming data including replaced and aggregated messages .

US8954993B2
CLAIM 4
. The method of claim 1 , wherein the producer worker (data message) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
WO2009032493A1
CLAIM 8
. A method for dynamically filtering streaming data comprising : receiving a stream of consecutive data messages (producer worker) ;
buffering data messages in a data queue when the data streaming rate received at the data receiver exceeds a publishing limitation ;
filtering inclusive messages in the data queue ;
and distributing streaming data including replaced and aggregated messages .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker (data message) at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
WO2009032493A1
CLAIM 8
. A method for dynamically filtering streaming data comprising : receiving a stream of consecutive data messages (producer worker) ;
buffering data messages in a data queue when the data streaming rate received at the data receiver exceeds a publishing limitation ;
filtering inclusive messages in the data queue ;
and distributing streaming data including replaced and aggregated messages .

US8954993B2
CLAIM 9
. The VMM of claim 8 , wherein the processing module is further configured to build a table of queue usage based on at least one observed datacenter queue request (buffering data) .
WO2009032493A1
CLAIM 8
. A method for dynamically filtering streaming data comprising : receiving a stream of consecutive data messages ;
buffering data (datacenter queue request) messages in a data queue when the data streaming rate received at the data receiver exceeds a publishing limitation ;
filtering inclusive messages in the data queue ;
and distributing streaming data including replaced and aggregated messages .

US8954993B2
CLAIM 10
. The VMM of claim 9 , wherein the queue usage detector module is further configured to observe the at least one observed datacenter queue request (buffering data) .
WO2009032493A1
CLAIM 8
. A method for dynamically filtering streaming data comprising : receiving a stream of consecutive data messages ;
buffering data (datacenter queue request) messages in a data queue when the data streaming rate received at the data receiver exceeds a publishing limitation ;
filtering inclusive messages in the data queue ;
and distributing streaming data including replaced and aggregated messages .

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker (data message) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
WO2009032493A1
CLAIM 8
. A method for dynamically filtering streaming data comprising : receiving a stream of consecutive data messages (producer worker) ;
buffering data messages in a data queue when the data streaming rate received at the data receiver exceeds a publishing limitation ;
filtering inclusive messages in the data queue ;
and distributing streaming data including replaced and aggregated messages .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker (data message) that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
WO2009032493A1
CLAIM 8
. A method for dynamically filtering streaming data comprising : receiving a stream of consecutive data messages (producer worker) ;
buffering data messages in a data queue when the data streaming rate received at the data receiver exceeds a publishing limitation ;
filtering inclusive messages in the data queue ;
and distributing streaming data including replaced and aggregated messages .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message sent by the producer worker (data message) before storing the first message .
WO2009032493A1
CLAIM 8
. A method for dynamically filtering streaming data comprising : receiving a stream of consecutive data messages (producer worker) ;
buffering data messages in a data queue when the data streaming rate received at the data receiver exceeds a publishing limitation ;
filtering inclusive messages in the data queue ;
and distributing streaming data including replaced and aggregated messages .

US8954993B2
CLAIM 17
. The datacenter of claim 14 , wherein the controller is further configured to build a table of queue usage based on at least one observed datacenter queue request (buffering data) .
WO2009032493A1
CLAIM 8
. A method for dynamically filtering streaming data comprising : receiving a stream of consecutive data messages ;
buffering data (datacenter queue request) messages in a data queue when the data streaming rate received at the data receiver exceeds a publishing limitation ;
filtering inclusive messages in the data queue ;
and distributing streaming data including replaced and aggregated messages .

US8954993B2
CLAIM 18
. The datacenter of claim 17 , wherein the controller is further configured to observe the at least one observed datacenter queue request (buffering data) .
WO2009032493A1
CLAIM 8
. A method for dynamically filtering streaming data comprising : receiving a stream of consecutive data messages ;
buffering data (datacenter queue request) messages in a data queue when the data streaming rate received at the data receiver exceeds a publishing limitation ;
filtering inclusive messages in the data queue ;
and distributing streaming data including replaced and aggregated messages .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker (data message) and the consumer worker are co-located on a multi-core device at the first datacenter location .
WO2009032493A1
CLAIM 8
. A method for dynamically filtering streaming data comprising : receiving a stream of consecutive data messages (producer worker) ;
buffering data messages in a data queue when the data streaming rate received at the data receiver exceeds a publishing limitation ;
filtering inclusive messages in the data queue ;
and distributing streaming data including replaced and aggregated messages .




US8954993B2

Filed: 2013-02-28     Issued: 2015-02-10

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
WO2008141900A1

Filed: 2008-04-29     Issued: 2008-11-27

Virtualized storage performance controller

(Original Assignee) International Business Machines Corporation     

Nicholas Michael O`Rourke, Lee Jason Sanders, William James Scales, Barry Douglas Whyte
US8954993B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (performance data) at a first server sending a first message to a datacenter queue at least partially stored at a second server ;

storing the first message in a queue cache at the first server , wherein the queue cache includes one of a copy and a partial copy of the datacenter queue ;

detecting a consumer worker at the first server sending a message request to the datacenter queue ;

providing the stored first message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the stored first message in response to receiving the signal .
WO2008141900A1
CLAIM 1
. An apparatus for real-time performance management of a virtualized storage system operable in a network having managed physical storage and virtual storage presented by an in-band virtualization controller comprising : a monitoring component operable in communication with the network for acquiring performance data (producer worker) from the managed physical storage and the virtual storage ;
and a cache controller component responsive to the monitoring component for adjusting cache parameters for the virtual storage .
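WO2008141900A1 claim 1's feedback loop (a monitoring component acquires performance data, and a cache controller adjusts cache parameters for the virtual storage in response) might look like the sketch below. The latency target, the doubling/halving policy, and the size bounds are all invented for illustration; the claim does not recite any particular adjustment rule:

```python
def adjust_cache_size(current_mb, avg_latency_ms, target_ms=5.0):
    """Grow the cache when observed latency exceeds the target, else shrink.

    A hypothetical cache-controller policy driven by monitored performance
    data; the bounds (64 MB floor, 4096 MB cap) are assumptions.
    """
    if avg_latency_ms > target_ms:
        return min(current_mb * 2, 4096)
    return max(current_mb // 2, 64)

grown = adjust_cache_size(256, avg_latency_ms=12.0)  # latency above target
shrunk = adjust_cache_size(256, avg_latency_ms=1.0)  # latency below target
```

The sketch adjusts a cache tuning parameter based on observed load; it does not store, serve, or modify individual queued messages, which is the gap a rebuttal to this mapping would likely press.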

US8954993B2
CLAIM 2
. The method of claim 1 , further comprising intercepting the first message sent by the producer worker (performance data) before storing the first message .
WO2008141900A1
CLAIM 1
. An apparatus for real-time performance management of a virtualized storage system operable in a network having managed physical storage and virtual storage presented by an in-band virtualization controller comprising : a monitoring component operable in communication with the network for acquiring performance data (producer worker) from the managed physical storage and the virtual storage ;
and a cache controller component responsive to the monitoring component for adjusting cache parameters for the virtual storage .

US8954993B2
CLAIM 3
. The method of claim 1 , wherein the producer worker (performance data) and the consumer worker are co-located on a multi-core device at the first server .
WO2008141900A1
CLAIM 1
. An apparatus for real-time performance management of a virtualized storage system operable in a network having managed physical storage and virtual storage presented by an in-band virtualization controller comprising : a monitoring component operable in communication with the network for acquiring performance data (producer worker) from the managed physical storage and the virtual storage ;
and a cache controller component responsive to the monitoring component for adjusting cache parameters for the virtual storage .

US8954993B2
CLAIM 4
. The method of claim 1 , wherein the producer worker (performance data) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
WO2008141900A1
CLAIM 1
. An apparatus for real-time performance management of a virtualized storage system operable in a network having managed physical storage and virtual storage presented by an in-band virtualization controller comprising : a monitoring component operable in communication with the network for acquiring performance data (producer worker) from the managed physical storage and the virtual storage ;
and a cache controller component responsive to the monitoring component for adjusting cache parameters for the virtual storage .

US8954993B2
CLAIM 8
. A virtual machine manager (VMM) to locally process queue requests from co-located workers in a datacenter , the VMM comprising : a queue usage detector module configured to : detect a producer worker (performance data) at a first server , wherein the producer worker sends a first message to a datacenter queue at least partially stored at a second server ;

and detect a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue , and wherein the producer worker and the consumer worker are co-located on a multi-core device at the first server ;

and a processing module configured to : intercept the first message sent by the producer worker ;

store the first message at the first server ;

provide the stored first message to the consumer worker in response to the message request ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
WO2008141900A1
CLAIM 1
. An apparatus for real-time performance management of a virtualized storage system operable in a network having managed physical storage and virtual storage presented by an in-band virtualization controller comprising : a monitoring component operable in communication with the network for acquiring performance data (producer worker) from the managed physical storage and the virtual storage ;
and a cache controller component responsive to the monitoring component for adjusting cache parameters for the virtual storage .

US8954993B2
CLAIM 11
. The VMM of claim 8 , wherein the producer worker (performance data) and the consumer worker are executed on different virtual machines , the different virtual machines configured to execute on the same physical hardware .
WO2008141900A1
CLAIM 1
. An apparatus for real-time performance management of a virtualized storage system operable in a network having managed physical storage and virtual storage presented by an in-band virtualization controller comprising : a monitoring component operable in communication with the network for acquiring performance data (producer worker) from the managed physical storage and the virtual storage ;
and a cache controller component responsive to the monitoring component for adjusting cache parameters for the virtual storage .

US8954993B2
CLAIM 14
. A cloud-based datacenter configured to locally process queue requests from co-located workers in the datacenter , the datacenter comprising : a first and a second virtual machine (VM) operable to be executed on one or more physical machines ;

and a datacenter controller configured to : detect a producer worker (performance data) that is executed on a first VM and sends a first message to a datacenter queue at least partially stored at a first datacenter location ;

intercept the first message sent by the producer worker before storing the first message ;

store the first message in a queue cache at a second datacenter location different from the first ;

detect a consumer worker that is executed on a second VM and sends a message request to the datacenter queue ;

provide the stored first message to the consumer worker in response to the message request , wherein the first message is stored and provided from within a server to the producer worker and the consumer worker ;

receive a signal from a command channel associated with the datacenter queue ;

and modify the stored first message in response to receiving the signal .
WO2008141900A1
CLAIM 1
. An apparatus for real-time performance management of a virtualized storage system operable in a network having managed physical storage and virtual storage presented by an in-band virtualization controller comprising : a monitoring component operable in communication with the network for acquiring performance data (producer worker) from the managed physical storage and the virtual storage ;
and a cache controller component responsive to the monitoring component for adjusting cache parameters for the virtual storage .

US8954993B2
CLAIM 15
. The datacenter of claim 14 , wherein the controller is further configured to intercept the first message sent by the producer worker (performance data) before storing the first message .
WO2008141900A1
CLAIM 1
. An apparatus for real-time performance management of a virtualized storage system operable in a network having managed physical storage and virtual storage presented by an in-band virtualization controller comprising : a monitoring component operable in communication with the network for acquiring performance data (producer worker) from the managed physical storage and the virtual storage ;
and a cache controller component responsive to the monitoring component for adjusting cache parameters for the virtual storage .

US8954993B2
CLAIM 19
. The datacenter of claim 14 , wherein the producer worker (performance data) and the consumer worker are co-located on a multi-core device at the first datacenter location .
WO2008141900A1
CLAIM 1
. An apparatus for real-time performance management of a virtualized storage system operable in a network having managed physical storage and virtual storage presented by an in-band virtualization controller comprising : a monitoring component operable in communication with the network for acquiring performance data (producer worker) from the managed physical storage and the virtual storage ;
and a cache controller component responsive to the monitoring component for adjusting cache parameters for the virtual storage .

US8954993B2
CLAIM 20
. The datacenter of claim 14 , wherein the first and second VMs (performance management) are configured to execute on the same physical machine .
WO2008141900A1
CLAIM 1
. An apparatus for real-time performance management (second VMs) of a virtualized storage system operable in a network having managed physical storage and virtual storage presented by an in-band virtualization controller comprising : a monitoring component operable in communication with the network for acquiring performance data from the managed physical storage and the virtual storage ;
and a cache controller component responsive to the monitoring component for adjusting cache parameters for the virtual storage .