Purpose: Invalidity Analysis


Patent: US9479472B2
Filed: 2013-02-28
Issued: 2016-10-25
Patent Holder: (Original Assignee) Empire Technology Development LLC; (Current Assignee) INVINCIBLE IP LLC; Ardent Research Corp
Inventor(s): Ezekiel Kruglick

Title: Local message queue processing for co-located workers

Abstract: Technologies are provided for locally processing queue requests from co-located workers. In some examples, information about the usage of remote datacenter queues by co-located workers may be used to determine one or more matched queues. Messages from local workers to a remote datacenter queue classified as a matched queue may be stored locally. Subsequently, local workers that request messages from matched queues may be provided with the locally-stored messages.
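The abstract describes a concrete flow: observe which remote datacenter queues the co-located workers produce to and consume from, classify a queue used on both sides locally as a matched queue, store messages bound for a matched queue locally, and serve local consumers from that local store instead of the remote queue. The short Python sketch below is only an illustration of that flow under stated assumptions; the names (LocalQueueProxy, send, receive) and the remote_client interface are hypothetical and are not taken from the patent's specification.

from collections import defaultdict, deque

class LocalQueueProxy:
    """Hypothetical sketch: intercept queue traffic from co-located workers,
    detect 'matched' remote queues (produced to and consumed from locally),
    and serve messages for those queues from local storage."""

    def __init__(self, remote_client):
        self.remote = remote_client            # assumed client for the remote datacenter queue service
        self.producers = defaultdict(set)      # queue name -> local workers that send to it
        self.consumers = defaultdict(set)      # queue name -> local workers that receive from it
        self.local_store = defaultdict(deque)  # locally stored messages for matched queues

    def _is_matched(self, queue):
        # A queue is "matched" when co-located workers both produce to and consume from it.
        return bool(self.producers[queue]) and bool(self.consumers[queue])

    def send(self, worker_id, queue, message):
        self.producers[queue].add(worker_id)
        if self._is_matched(queue):
            self.local_store[queue].append(message)   # keep the message local
        else:
            self.remote.send(queue, message)          # fall through to the remote datacenter queue

    def receive(self, worker_id, queue):
        self.consumers[queue].add(worker_id)
        if self._is_matched(queue) and self.local_store[queue]:
            return self.local_store[queue].popleft()  # serve the locally stored message
        return self.remote.receive(queue)             # otherwise request from the remote queue

In practice the matching decision would rest on the datacenter queue usage information mentioned in the abstract (observed request patterns over time) rather than this simple producer/consumer check.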




Disclaimer: Apex Standards Pseudo Claim Charting (PCC) is not intended to replace expert opinion; it provides due diligence and transparency prior to high-precision charting. PCC conducts aggressive mapping (based on broadest reasonable, ordinary, or customary interpretation, and on multilingual translation) between a target patent's claim elements and other documents (potential technical standard specifications, or prior art in the same or different jurisdictions), allowing a top-down, a priori evaluation with which stakeholders can quickly and effectively assess standard essentiality (potential strengths) or invalidity (potential weaknesses) before making complex, high-value decisions. PCC is designed to relieve the initial burden of proof through an exhaustive listing of contextual semantic mappings that can serve as building blocks toward a litigation-ready work product. Stakeholders may then refine the shortlisted PCC mappings or identify other relevant materials in order to formulate strategy and pursue further objectives.
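The disclaimer above frames PCC's core step as a contextual semantic mapping between a target patent's claim elements and the text of candidate references. Purely as an illustration of what such a mapping output looks like, the toy Python function below pairs each claim term with the reference passage sharing the most tokens; the token-overlap scoring, the threshold, and all names are assumptions made here and do not describe Apex Standards' actual method.

def map_claim_terms(claim_terms, reference_passages, threshold=0.3):
    # Toy illustration: pair each claim term with the reference passage that
    # shares the largest fraction of the term's tokens.
    mappings = []
    for term in claim_terms:
        term_tokens = set(term.lower().split())
        best_passage, best_score = None, 0.0
        for passage in reference_passages:
            overlap = term_tokens & set(passage.lower().split())
            score = len(overlap) / len(term_tokens)
            if score > best_score:
                best_passage, best_score = passage, score
        if best_score >= threshold:
            mappings.append((term, best_passage, round(best_score, 2)))
    return mappings

# Hypothetical example: two claim terms of US9479472B2 against two reference sentences.
print(map_claim_terms(
    ["datacenter queue", "co-located workers"],
    ["messages are routed through a datacenter message queue",
     "workers co-located on one host share a cache"]))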



 
Independent Claim

Table columns: Ground | Reference | Owner of the Reference | Title | Semantic Mapping | Basis | Anticipation | Challenged Claims (claims 1 through 20)

Note on the flattened layout below: each ground gives the reference citation and its owner, the reference title followed by mapped term pairs (apparently a claim term of US9479472B2 paired with the corresponding term found in the reference), then, where available, the statutory basis cited (35 U.S.C. 102/103) and excerpts of the anticipation/obviousness reasoning, and finally a run of "X" marks from the Challenged Claims grid; the alignment of those marks with individual claim numbers 1-20 is not preserved in this extraction.
1

USENIX Association Proceedings Of The 2006 USENIX Annual Technical Conference. : 29-42 2006

(Liu, 2006)
International Business Machines Corporation
High Performance VMM-bypass I/O In Virtual Machines datacenter queue, VMM application virtual machine monitor

readable storage, delete command I/O device

XXXXXXXXXXXXXXXXXX
2

IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS. 17 (6): 1127-1144 JUN 1999

(Liebeherr, 1999)
University of Virginia (UVA), International Business Machines Corporation
Priority Queue Schedulers With Approximate Sorting In Output-buffered Switches matching producer, determining matching producer worker computational overhead

queue requests other time

producer worker information time stamp

XXXXXXXX
3

PROCEEDINGS OF THE 3RD USENIX WINDOWS NT SYMPOSIUM. : 21-30 1999

(Forin, 1999)
No Affiliation
High-performance Distributed Objects Over System Area Networks command channel full advantage

computing device speed networks

XXXXXXXXXXXX
4

PROCEEDINGS OF THE SECOND SYMPOSIUM ON OPERATING SYSTEMS DESIGN AND IMPLEMENTATION (OSDI 96). : 245-259 1996

(Buzzard, 1996)
Hewlett Packard Labs
An Implementation Of The Hamlyn Sender-managed Interface Architecture queue user table memory management

readable storage based memory

XXXXX
5

US20130014114A1

(Akihito Nagata, 2013)
(Original Assignee) Sony Interactive Entertainment Inc     

(Current Assignee)
Sony Interactive Entertainment Inc
Information processing apparatus and method for carrying out multi-thread processing readable storage device identity information

identify one configured to store

producer worker, consumer worker storage location, one processor

queue cache write access

datacenter queue, queue user table push module

35 U.S.C. 103(a)

35 U.S.C. 102(e)
describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches assigning the thread priority to the available thread based on a priority of the task distributed to the…

teaches a method of delaying the execution of thread groups…
XXXXXXXXXXXXXXXXXXXX
6

US20130044749A1

(Mark Eisner, 2013)
(Original Assignee) FireStar Software Inc     

(Current Assignee)
FireStar Software Inc
System and method for exchanging information among exchange applications consumer worker including information

identify one configured to store

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses wherein a message contains metadata and is processed…

discloses determining from the request a specified template paragraphs…

discloses that patient data may be converted from a proprietary format to a common format and from a common format to a…

discloses a method of asynchronously communicating with a web application comprising receiving one or more messages from…
XXXXXXXXXXXXX
7

US20120233273A1

(James Robert Miner, 2012)
(Original Assignee) James Robert Miner; Jason Paul Oettinger     Systems and methods for message collection delete command delete command

identifying one second data

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses a communication system and method for pushing electronic messages to a wireless portable device arrival…

discloses assigning a unique identifier to the event information see col…

teaches wherein the graphical user interface may be operable to display a notification alert generated by the…

discloses determining modifications made to a shared folder located on a first computer system from a second computer…
XXXX
8

CN102713852A

(张卫国, 2012)
(Original Assignee) Huawei Technologies Co Ltd     

(Current Assignee)
Huawei Technologies Co Ltd
A multi-core processor system (一种多核处理器系统) datacenter queue information 连接一 [connected to a]

readable storage device 的一组 [a group of]

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XX
9

CN102668516A

(邓金波, 2012)
(Original Assignee) Huawei Technologies Co Ltd     

(Current Assignee)
Huawei Technologies Co Ltd
Method and apparatus for implementing message delivery in a cloud message service (一种云消息服务中实现消息传递的方法和装置) second message 消息传递 [message delivery]

queue requests 包含的 [contained/included]

35 U.S.C. 103(a)

35 U.S.C. 102(b)
discloses disclose the priority scheme includes an indication received from the assistant that particular ones of the…

teaches the additional features wherein said access manager is responsive to said source/destination policy specified…

describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

discloses that a rule can be used as a template for other rules in order to create a new but similar rule column…
XXXX
10

US20120066177A1

(Scott Swanburg, 2012)
(Original Assignee) AT&T Mobility II LLC     

(Current Assignee)
AT&T Mobility II LLC
Systems and Methods for Remote Deletion of Contact Information datacenter queue, datacenter queue information desktop computer

readable storage readable storage

message request message request

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches the status report transmitted from the mobile unit to the user interface unit according to one of SMTP POP…

teaches the memory storing the status report for a predefined length of time after the status report is transmitted to…

teaches a receiver for receiving positioning data from satellites allowing the processor to use the positioning data…

teaches a communication system comprising A mobile unit having a processor a memory and a wireless modem for…
XXXXXXXXXXXXXXX
11

US20130036427A1

(Han Chen, 2013)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
Message queuing with flexible consistency options readable storage readable storage

identifying one identifying one

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XX
12

US20130007183A1

(James Christopher Sorenson, 2013)
(Original Assignee) Amazon Technologies Inc     

(Current Assignee)
Amazon Technologies Inc
Methods And Apparatus For Remotely Updating Executing Processes computing device computing device

producer worker one processor

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches the use of free pool of storage resources free pool of media units in storage area network and network…

discloses wherein said storage is arranged to receive an instruction for assigning the secondary volume for creating a…

teaches a switch being provided to interconnect the communication channel so that communication can be made between…

teaches synchronizing endpoints eg all computers to maintain integrity and coherency of data…
XXXXXXXXXXXXXXXX
13

US20120117167A1

(Aran Sadja, 2012)
(Original Assignee) Sony Corp     

(Current Assignee)
Sony Corp
System and method for providing recommendations to a user in a viewing social network datacenter queue more servers

delete command more user

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses a machine configured to of transmitting at least one indicator of an encapsulation of at least one skill…

teaches determining a communications strength between each of the multiple identities associated with the user and…

discloses in one embodiment that a fan of a cricket watching a test match broadcast free to air could anticipate a…

discloses sending of external program information such as background information for certain programs including video…
XXXXXXXXXXXXXX
14

US20120254876A1

(Douglas L. Bishop, 2012)
(Original Assignee) Honeywell International Inc     

(Current Assignee)
Honeywell International Inc
Systems and methods for coordinating computing functions to accomplish a task identify one configured to store

readable storage readable storage

computing device computing device

store instructions memory location

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses automatic billing of a customer subscriber when the customer makes a purchase signs up for a subscription…

discloses notification preferences provided by a user information regarding variables and parameters of a user that…

discloses a global XML web services architecture global web services environment which is built on XML web service…

discloses performing an SQL join operation on two streams of data one of which is filtered data using a SELECT statement…
XXXXXXXXXX
15

KR20120100644A

(김용민, 2012)
(Original Assignee) Samsung Thales Co., Ltd. (삼성탈레스 주식회사)     Common message distributor and window message transmission method thereof (공통 메시지 분배기 및 그 윈도우 메시지 전송 방법) second message second message

queue user table 컨텐츠 [content]

consumer worker information 제어부 [control unit]

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXX
16

US20110282948A1

(Krishna Vitaldevara, 2011)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Technology Licensing LLC
Email tags second server client devices

second message email messages

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses as his invention an event driven and conditional rule based mail messaging system which can be transparently…

teaches it detects if the message C incoming email message is a command message then the user acts upon the command or…

teaches a clientserver system comprising a HTTP server module…

teaches the mobile device sending the session key to the server to retrieve the message attachment but this requires…
XXXXX
17

US20110213991A1

(Andrew Wolfe, 2011)
(Original Assignee) Empire Technology Development LLC     

(Current Assignee)
Empire Technology Development LLC
Processor core communication in multi-core processor datacenter queue, datacenter queue information more processor cores

computing device computing device

identifying one control signals

XXXXXXXXXXXXXXXXXX
18

US20100185665A1

(Monroe Horn, 2010)
(Original Assignee) SUNSTEIN KANN MURPHY AND TIMBERS LLP     

(Current Assignee)
SUNSTEIN KANN MURPHY AND TIMBERS LLP
Office-Based Notification Messaging System identify one configured to store

producer worker, determining matching producer worker message recipients

readable storage readable storage

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
teaches the invention substantially as claimed including a method system and article for processing solicited…

teaches it detects if the message C incoming email message is a command message then the user acts upon the command or…

discloses at least one trust category comprising a suspicious message category see…

discloses the claimed subject matter as discussed above in claim…
XXXXXXXXXXXXX
19

US20110138400A1

(Allan T. Chandler, 2011)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
Automated merger of logically associated messages in a message queue store instructions host computing platform

producer worker one processor

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXXXXXXXX
20

CN101668019A

(黄翔, 2010)
(Original Assignee) ZTE Corp     Gateway determination method and apparatus, and message sending method and system (网关确定方法、装置和消息发送方法、系统) second message 个多媒体消息 [multimedia message(s)]

queue user table 标识对应 [corresponding to the identifier]

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches the stories for transmission to the end user station are selected on the basis of content of the story and…

teaches the servers are ranked according to parameters for optimizing access column…

teaches a telephony/data application interface that converts spoken queries into text for electronic commands…

discloses the display of the annotated document on a web browser…
XXXXXX
21

US20100191783A1

(Robert S. Mason, 2010)
(Original Assignee) Nasuni Corp     

(Current Assignee)
Nasuni Corp
Method and system for interfacing to cloud storage second message structured data

identifying one identifying one

delete command more user

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses wherein the memory areas include a writing sector pars…

teaches of a method in a distributed computer system capable of redundantly storing a plurality of data objects as…

teaches that there is a priority or importance associated with files and their respective versions when deciding on…

discloses a centrally located database and central network connected location of storage and recovery services pars…
XXXXX
22

US20100010671A1

(Atsushi Miyamoto, 2010)
(Original Assignee) Sony Corp     

(Current Assignee)
Sony Corp
Information processing system, information processing method, robot control system, robot control method, and computer program message request reception information

producer worker respective processes

queue user table transmission source

datacenter queue different computer

matching producer, determining matching producer worker order r

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches the method and system are implemented on a portable electronic device col…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…

describes workflow initiation according to an organizational relationship between the users/business objects see claim…
XXXXXXXXXXXXXXXXXXXX
23

EP2449849A1

(Harsh Jahagirdar, 2012)
(Original Assignee) Nokia Oyj     

(Current Assignee)
Nokia Oyj
Resource allocation VMM application more system settings

computing device computing device

producer worker one processor

35 U.S.C. 103(a)

35 U.S.C. 102(e)
describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches a system and method for exchanging information among exchange applications…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXXXXXXXXXXXX
24

US20100161753A1

(Gerhard Dietrich Klassen, 2010)
(Original Assignee) Research in Motion Ltd     

(Current Assignee)
BlackBerry Ltd
Method and communication device for processing data for transmission from the communication device to a second communication device message request instant messaging

readable storage device said database

store instructions said memory

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches a system for transmitting data as claimed in claim…

teaches the memory storing the status report for a predefined length of time after the status report is transmitted to…

teaches a detection component coupled to the processor wherein the detection component comprises a sensor for…

teaches the status report transmitted from the mobile unit to the user interface unit according to one of SMTP POP…
XXXXXXXXX
25

US20100107176A1

(Joerg Kessler, 2010)
(Original Assignee) SAP SE     

(Current Assignee)
SAP SE
Maintenance of message serialization in multi-queue messaging environments identifying one selection criteria

datacenter queue information transmission time

second message queue management

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXX
26

JP2010020650A

(Atsushi Miyamoto, 2010)
(Original Assignee) Sony Corp (ソニー株式会社)     Information processing system and information processing method, robot control system and control method, and computer program (情報処理システム及び情報処理方法、ロボットの制御システム及び制御方法、並びにコンピュータ・プログラム) intercept module 受信モジュール [receiving module]

readable storage device 少なくとも [at least]

first virtual machine えること

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches the method and system are implemented on a portable electronic device col…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…

describes workflow initiation according to an organizational relationship between the users/business objects see claim…
XXX
27

US20080270536A1

(James Louis Keesey, 2008)
(Original Assignee) James Louis Keesey; Gerald Johann Wilmot     Document shadowing intranet server, memory medium and method computing device elapsed time

store instructions said memory

XXXXXXXXXX
28

US20090254920A1

(Vladimir D. Truschin, 2009)
(Original Assignee) Intel Corp     

(Current Assignee)
Intel Corp
Extended dynamic optimization of connection establishment and message progress processing in a multi-fabric message passing interface implementation datacenter queue multi-core processor

second message second message

local processing first spin

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXXXXXXXXXXX
29

US20090022285A1

(Scott Swanburg, 2009)
(Original Assignee) AT&T Mobility II LLC     

(Current Assignee)
AT&T Mobility II LLC
Dynamic Voicemail Receptionist System identify one configured to store

store instructions store instructions

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches the status report transmitted from the mobile unit to the user interface unit according to one of SMTP POP…

teaches the memory storing the status report for a predefined length of time after the status report is transmitted to…

teaches a receiver for receiving positioning data from satellites allowing the processor to use the positioning data…

teaches a communication system comprising A mobile unit having a processor a memory and a wireless modem for…
XX
30

US20090241118A1

(Krishna K. Lingamneni, 2009)
(Original Assignee) American Express Travel Related Services Co Inc     

(Current Assignee)
Liberty Peak Ventures LLC
System and method for processing interface requests in batch intercept module general purpose computer

queue requests requesting application

readable storage readable storage

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXX
31

US20090234908A1

(Marc D. Reyhner, 2009)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Technology Licensing LLC
Data transmission queuing using fault prediction consumer worker information common component

second message queue management

second server remote computer

identifying one second data

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXXXXX
32

EP1939743A2

(Franz Weber, 2008)
(Original Assignee) SAP SE     

(Current Assignee)
SAP SE
Event correlation second message, message request incoming messages

identifying one second data

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXXXXX
33

US20090113440A1

(Jared B. Dorny, 2009)
(Original Assignee) Raytheon Co     

(Current Assignee)
Raytheon Co
Multiple Queue Resource Manager network traffic priority level

producer worker one processor

computing device elapsed time

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXXXXXXXXXXXX
34

US20080077939A1

(Richard Michael Harran, 2008)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
Solution for modifying a queue manager to support smart aliasing which permits extensible software to execute against queued data without application modifications readable storage device additional processing

second server given operation

identifying one following steps

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXX
35

US20070239838A1

(James Laurel, 2007)
(Original Assignee) Nokia Oyj; Twango Inc     

(Current Assignee)
Nokia Technologies Oy
Methods and systems for digital content sharing network connection third parties

datacenter queue more servers

message request second email, first email

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches the memory storing the status report for a predefined length of time after the status report is transmitted to…

teaches a receiver for receiving positioning data from satellites allowing the processor to use the positioning data…

teaches it detects if the message C incoming email message is a command message then the user acts upon the command or…

teaches a communication system comprising A mobile unit having a processor a memory and a wireless modem for…
XXXXXXXXXXXXXXXX
36

US20080212602A1

(Alphana B. Hobbs, 2008)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
Method, system and program product for optimizing communication and processing functions between disparate applications first server second request

identifying one data elements

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches the smart video display unit performs a configuration check in conjunction with a configuration identification…

discloses using last come first serve logic with a MAC layer…

describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

teaches wherein the additional software comprises software for continuously monitoring interfaces and internal…
XXXXXX
37

US20080148281A1

(William R. Magro, 2008)
(Original Assignee) Intel Corp     

(Current Assignee)
Intel Corp
RDMA (remote direct memory access) data transfer in a virtual environment store instructions remote direct memory access

datacenter queue, VMM application virtual machine monitor

second server, second message second virtual machine

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches the claimed limitations wherein providing a tenant with user access to the generated data collection…

discloses the claimed computer program product and apparatus for reconciling billing measures to cost factors the…

teaches a routing application stored on and executing from a memory media of the routing engine…

teaches a method of managing memory of a database management system database server applications…
XXXXXXXXXXXXXXXXXXX
38

US20070198437A1

(Mark Eisner, 2007)
(Original Assignee) FireStar Software Inc     

(Current Assignee)
FireStar Software Inc
System and method for exchanging information among exchange applications consumer worker including information

identify one configured to store

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses wherein a message contains metadata and is processed…

discloses determining from the request a specified template paragraphs…

discloses that patient data may be converted from a proprietary format to a common format and from a common format to a…

discloses a method of asynchronously communicating with a web application comprising receiving one or more messages from…
XXXXXXXXXXXXX
39

EP1955281A2

(Mark Eisner, 2008)
(Original Assignee) FireStar Software Inc     

(Current Assignee)
FireStar Software Inc
System and method for exchanging information among exchange applications second server remote client application

identify one configured to store

delete command XML documents

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses wherein a message contains metadata and is processed…

discloses determining from the request a specified template paragraphs…

discloses that patient data may be converted from a proprietary format to a common format and from a common format to a…

discloses a method of asynchronously communicating with a web application comprising receiving one or more messages from…
XXXXXXX
40

US20070180150A1

(Mark Eisner, 2007)
(Original Assignee) FireStar Software Inc     

(Current Assignee)
FireStar Software Inc
System and method for exchanging information among exchange applications second server remote client application

identify one configured to store

delete command XML documents

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses wherein a message contains metadata and is processed…

discloses determining from the request a specified template paragraphs…

discloses that patient data may be converted from a proprietary format to a common format and from a common format to a…

discloses a method of asynchronously communicating with a web application comprising receiving one or more messages from…
XXXXXXX
41

US20070168301A1

(Mark Eisner, 2007)
(Original Assignee) FireStar Software Inc     

(Current Assignee)
FireStar Software Inc
System and method for exchanging information among exchange applications consumer worker including information

identify one configured to store

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses wherein a message contains metadata and is processed…

discloses determining from the request a specified template paragraphs…

discloses that patient data may be converted from a proprietary format to a common format and from a common format to a…

discloses a method of asynchronously communicating with a web application comprising receiving one or more messages from…
XXXXXXXXXXXXX
42

US20080075015A1

(Ossi Lindvall, 2008)
(Original Assignee) Nokia Oyj     

(Current Assignee)
Provenance Asset Group LLC ; Nokia USA Inc
Method for time-stamping messages delete command memory access controller

computing device computing device

producer worker one processor

35 U.S.C. 103(a)

35 U.S.C. 102(b)
discloses that interrupts can be generated by hardware timers…

teaches event recognition by a state machine whose state is dependent upon historical information…

teaches the other limitations of the claim in parts beyond the cited portions…

discloses a buffer for synchronizing the at least one trigger signal with the at least one counter column…
XXXXXXXXXXXXXXXXX
43

US20070123280A1

(Faith McGary, 2007)
(Original Assignee) Mcgary Faith; Ian Bacon; Michael Bates; Christine Baumeister     

(Current Assignee)
Grape Technology Group Inc
System and method for providing mobile device services using SMS communications identify one configured to store

second server embedded code

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches when a message is sent from an outside email source to a mobile phone…

discloses the system thanking the user for the call and informing the user that a text message will be sent to the user…

teaches the invention substantially as claimed and described in claims…

discloses A method for receiving live human feedback of an image provided using a mobile device equipped with a camera…
XXXXX
44

US20070005713A1

(Thierry LeVasseur, 2007)
(Original Assignee) 0733660 BC Ltd (DBA E-MAIL2)     

(Current Assignee)
Appriver Canada Ulc
Secure electronic mail system consumer worker including information

matching producer encryption method

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses a list of public/private key pairs are stored at a server wherein the private key is stored in an encrypted…

teaches storing destination information and displaying the destination information on the updateable electronic…

discloses a method comprising the steps of composing an electronic message that includes non private information to be…

teaches a unique identification code for a device a group of N devices a set identification code and a corresponding…
XXXXXXXXXXXXX
45

US20070113101A1

(Thierry LeVasseur, 2007)
(Original Assignee) 0733660 BC Ltd (DBA E-MAIL2)     

(Current Assignee)
Appriver Canada Ulc
Secure electronic mail system with configurable cryptographic engine identify one configured to store

second server client component

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
discloses a list of public/private key pairs are stored at a server wherein the private key is stored in an encrypted…

teaches storing destination information and displaying the destination information on the updateable electronic…

discloses a method comprising the steps of composing an electronic message that includes non private information to be…

teaches a unique identification code for a device a group of N devices a set identification code and a corresponding…
XXXXX
46

US20070288931A1

(Gokhan Avkarogullari, 2007)
(Original Assignee) PortalPlayer Inc     

(Current Assignee)
Nvidia Corp
Multi processor and multi thread safe message queue with hardware assistance queue requests exchanging messages

network traffic said determination

second message queue management

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches sharing objects with programs developed in different languages including C C and…

describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

teaches wherein said security module selectively purges all of the data in said shared memory APA pages…

teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…
XXXXXX
47

US20070174398A1

(Frank Addante, 2007)
(Original Assignee) StrongMail Systems Inc     

(Current Assignee)
Selligent Inc
Systems and methods for communicating logic in e-mail messages queue user table central processing

readable storage readable storage

store instructions database query

message request web service

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches the status report transmitted from the mobile unit to the user interface unit according to one of SMTP POP…

teaches a receiver for receiving positioning data from satellites allowing the processor to use the positioning data…

teaches the memory storing the status report for a predefined length of time after the status report is transmitted to…

teaches a communication system comprising A mobile unit having a processor a memory and a wireless modem for…
XXXXXXXXXXXX
48

US20060146991A1

(J. Thompson, 2006)
(Original Assignee) Tervela Inc     

(Current Assignee)
Tervela Inc
Provisioning and management in a message publish/subscribe system first server external authentication

computing device network bandwidth

network traffic dynamic resource

producer worker data message

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
discloses one or more interfaces to one or more communications channels that may include one or more interfaces to user…

discloses a publication/subscriber environment in which messages flow from a message broker…

discloses that a message broker can receive raw stock trade information such as price and volume from the NYSE and…

discloses A client session's time stamp is updated each time a message transaction containing the session id for the…
XXXXXXXXXXXXXXXX
49

US20070156834A1

(Radoslav Nikolov, 2007)
(Original Assignee) SAP SE     

(Current Assignee)
SAP SE
Cursor component for messaging service command channel acknowledging receipt

store instructions said memory

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches a routing application stored on and executing from a memory media of the routing engine…

teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

describes workflow initiation according to an organizational relationship between the users/business objects see claim…

describes a plurality of graphics primitives of the first display frame…
XXX
50

US20060168070A1

(J. Thompson, 2006)
(Original Assignee) Tervela Inc     

(Current Assignee)
Tervela Inc
Hardware-based messaging appliance network connection network connection

queue user table central processing

second message, message request incoming messages

readable storage device later retrieval

identifying one second groups

producer worker data message

network traffic data plane

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
discloses one or more interfaces to one or more communications channels that may include one or more interfaces to user…

discloses a publication/subscriber environment in which messages flow from a message broker…

discloses that a message broker can receive raw stock trade information such as price and volume from the NYSE and…

discloses A client session's time stamp is updated each time a message transaction containing the session id for the…
XXXXXXXXXXXXXXXXX
51

US20060146999A1

(J. Thompson, 2006)
(Original Assignee) Tervela Inc     

(Current Assignee)
Tervela Inc
Caching engine in a messaging system readable storage, readable storage device management system

second server complete message

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
discloses one or more interfaces to one or more communications channels that may include one or more interfaces to user…

discloses a publication/subscriber environment in which messages flow from a message broker…

discloses that a message broker can receive raw stock trade information such as price and volume from the NYSE and…

discloses A client session's time stamp is updated each time a message transaction containing the session id for the…
XXXX
52

US7624250B2

(Sinn Wee Lau, 2009)
(Original Assignee) Intel Corp     

(Current Assignee)
Intel Corp
Heterogeneous multi-core processor having dedicated connections between processor cores local processing multiple register

datacenter queue same instruction

readable storage same function

XXXXXXXXXXXXXX
53

US20070094664A1

(Kimming So, 2007)
(Original Assignee) Broadcom Corp     

(Current Assignee)
Avago Technologies General IP Singapore Pte Ltd
Programmable priority for concurrent multi-threaded processors first server second request

network traffic priority level

store instructions main memory

queue cache cache line

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches data processing elements having vector registers vector units…

discloses the claimed invention except for where said accessing results in a cache miss wherein said method further…

teaches wherein the processor device is adapted for the sequential processing unit to be blocked from accessing some…

teaches using application specific multimedia DSP and other kinds of coprocessors it does not teach the data…
XXXXXXXX
54

US20060031568A1

(Vadim Eydelman, 2006)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Technology Licensing LLC
Adaptive flow control protocol readable storage, readable storage device transferring data

identifying one following steps

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
discloses transfer operation to a remote memory in a remote system with other memory buffers in the local system see…

discloses link adaptation is a dynamic selection of modulation and coding schemes based on radio link quality column…

discloses a data transfer between two applications or devices…

teaches when said counter is equal to at least a predetermined value and decrementing said counter by said byte size…
XX
55

US20070168567A1

(William Boyd, 2007)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
System and method for file based I/O directly between an application instance and an I/O adapter producer worker, consumer worker storage location

queue cache, queue requests system memory, I/O request

message request start address

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches sharing objects with programs developed in different languages including C C and…

describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

teaches wherein said security module selectively purges all of the data in said shared memory APA pages…

teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…
XXXXXXXXXXXXXXXXXXX
56

US20070005572A1

(Travis Schluessler, 2007)
(Original Assignee) Intel Corp     

(Current Assignee)
Intel Corp
Architecture and system for host management message request message request

second message second message

command channel second buffer

35 U.S.C. 103(a)

35 U.S.C. 102(b)

35 U.S.C. 102(e)
teaches that sensitive data such as patient records are securely transferred between a programmer and a data…

discloses an electronic health care compliance assistance comprising a timer for tracking total time and patient…

teaches a GUI for display within a touch screen display of a handheld device wherein the handheld device is configured…

teaches a medical retrieval method that incorporates the use of codes to identify relevant medical data col…
XXXXXXXXXXX
57

US20060184948A1

(Alan Cox, 2006)
(Original Assignee) Red Hat Inc     

(Current Assignee)
Red Hat Inc
System, method and medium for providing asynchronous input and output with less system calls to and from an operating system computing device computing device

producer worker one processor

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXXXXXXXXXXXX
58

US20050071316A1

(Ilan Caron, 2005)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Technology Licensing LLC
Method and apparatus for creating, sending, and using self-descriptive objects as messages over a message queuing network store instructions second instruction, third instruction

consumer worker one location

identifying one second data

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXXXXXXXXX
59

US20050091239A1

(Wayne Ward, 2005)
(Original Assignee) Unisys Corp     

(Current Assignee)
Unisys Corp
Queue bank repository and method for sharing limited queue banks in memory VMM application more available e

readable storage, readable storage device address space

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches A method comprising collecting a first information on a plurality of programs waiting on or holding a…

discloses dequeuing by the server read and write requests from the client computing device…

teaches transferring changed data to a second server computer pg…

teaches a background task carried out regularly to merge copies of data col…
XXXXXXXXXX
60

US20060036697A1

(Jun-Liang Lin, 2006)
(Original Assignee) Taiwan Semiconductor Manufacturing Co TSMC Ltd     

(Current Assignee)
Taiwan Semiconductor Manufacturing Co TSMC Ltd
Email system and method thereof readable storage readable storage

queue user table predefined value

readable storage device access authority

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches receiving an email at the user s PC host system via LAN…

teaches the status report transmitted from the mobile unit to the user interface unit according to one of SMTP POP…

teaches the memory storing the status report for a predefined length of time after the status report is transmitted to…

teaches a detection component coupled to the processor wherein the detection component comprises a sensor for…
XXXXX
61

CN1508682A

(A・康杜, 2004)
(Original Assignee) International Business Machines Corporation (国际商业机器公司)     Method, system and apparatus for task scheduling (任务调度的方法、系统和设备) datacenter queue, datacenter queue information 一个队列 [a queue]

local processing 计算一个 [compute a]

message request 这些请求 [these requests]

network connection 调度装置 [scheduling apparatus]

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…

describes workflow initiation according to an organizational relationship between the users/business objects see claim…
XXXXXXXXXXXXXXXX
62

JP2004199678A

(Ashish Kundu, 2004)
(Original Assignee) International Business Machines Corp (IBM; インターナショナル・ビジネス・マシーンズ・コーポレーション)     Method, system, and program product for task scheduling (タスク・スケジューリングの方法、システム、およびプログラム製品) readable storage device 少なくとも [at least]

queue requests の要求 [request(s)]

queue user table ヘッダ [header]

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…

describes workflow initiation according to an organizational relationship between the users/business objects see claim…
XXXXXXXX
63

EP1432188A1

(Young-Hoon Kim, 2004)
(Original Assignee) Samsung Electronics Co Ltd     

(Current Assignee)
Samsung Electronics Co Ltd
Email client and email facsimile machine delete command delete command

second message email messages

second server email client

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches wherein the cover sheet template editor is operated remotely via a device-embedded user interface user may…

teaches a multifunction peripheral MFP device having a facsimile capability iFax machine col…

discloses a communication system and method for pushing electronic messages to a wireless portable device arrival…

teaches transmitting first data representing changes to a first data class from the first data processing system to…
XXXXXXX
64

US7337214B2

(Michael Douglass, 2008)
(Original Assignee) YHC Corp     

(Current Assignee)
YHC Corp
Caching, clustering and aggregating server network connection network connection

consumer worker, producer worker information storage units

second server second server

XXXXXXXXXXXXXXXX
65

US20040107259A1

(Andrew Wallace, 2004)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Technology Licensing LLC
Routing of electronic messages using a routing map and a stateful script engine network traffic second client

identifying one second data

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches the invention as claimed including the method of claim…

teaches a receiver for receiving positioning data from satellites allowing the processor to use the positioning data…

teaches it detects if the message C incoming email message is a command message then the user acts upon the command or…

teaches a communication system comprising A mobile unit having a processor a memory and a wireless modem for…
XXXX
66

US20050015763A1

(William Alexander, 2005)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
Method and system for maintaining consistency during multi-threaded processing of LDIF data consumer worker, consumer worker pairs consecutive manner

queue user table loading data

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches wherein the means for generating a key is adapted to exclude a time and/or user indication contained in a…

discloses inserting the callback data into the callback list according to the callback order as a queue manager that…

discloses the invention as claimed including a system for automatically updating of computer access settings ie…

discloses the first read request is a wildcard read request the method further comprising generating a queue specific…
XXXXXXXXXXXXXX
67

US20040252709A1

(Samuel Fineberg, 2004)
(Original Assignee) Hewlett Packard Development Co LP     

(Current Assignee)
Hewlett Packard Development Co LP
System having a plurality of threads being allocatable to a send or receive queue store instructions remote direct memory access

queue cache memory accesses

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches the invention as claimed including the method of claim…

discloses an offload method comprising communicating data over a network utilizing a plurality of protocols associated…

discloses search apparatus which causes a plurality of management systems to execute search in parallel the search…

teaches compiling security policies into rules and creating a rules database in…
XXXXX
68

US20040215847A1

(Shelly Dirstine, 2004)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
Autonomic I/O adapter response performance optimization using polling store instructions software component

readable storage, delete command I/O device

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…

teaches the database system is an in memory database system database server…

describes workflow initiation according to an organizational relationship between the users/business objects see claim…
XXX
69

EP1474746A1

(Thomas E. Hamilton, 2004)
(Original Assignee) Proquent Systems Corp     

(Current Assignee)
Proquent Systems Corp
Management of message queues identifying one predetermined criterion, identifying one

second message second message

XXX
70

US20040117794A1

(Ashish Kundu, 2004)
(Original Assignee) International Business Machines Corp     

(Current Assignee)
International Business Machines Corp
Method, system and framework for task scheduling queue user table predefined value

first server, network traffic load balancing

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…

describes workflow initiation according to an organizational relationship between the users/business objects see claim…
XXXXXXXXXXX
71

US20040107240A1

(Boris Zabarski, 2004)
(Original Assignee) Conexant Inc     

(Current Assignee)
Conexant Inc ; Brooktree Broadband Holding Inc
Method and system for intertask messaging between multiple processors identifying one associated process

second message second message

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXX
72

US20030028607A1

(Graham Miller, 2003)
(Original Assignee) Graham Miller; Michael Hanson; Brian Axe; Evans Steven Richard     

(Current Assignee)
METRICSTREAM Inc
Methods and systems to manage and track the states of electronic media first server, second server client terminals

producer worker information time stamp

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches the approval of the book is based on the votes received online through a wide area network connection from at…

discloses the display of the annotated document on a web browser…

teaches filtering information based on the content standard predetermined by the client or the community column…

teaches a detection component coupled to the processor wherein the detection component comprises a sensor for…
XXXXXXXXX
73

US20030014551A1

(Kunihito Ishibashi, 2003)
(Original Assignee) Future System Consulting Corp     

(Current Assignee)
Future Architect Inc
Framework system delete command monitoring operation

second message queue management

store instructions ring buffer

35 U.S.C. 103(a)

35 U.S.C. 102(b)
discloses all subject matter of the claimed invention as discussed above with respect to claims…

describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

teaches diverting said email message from delivery to the folder…

discloses a similar method of providing electronic group card in which when a signer/participant signs the card he/she is…
XXXX
74

US20030055668A1

(Amitabh Saran, 2003)
(Original Assignee) TriVium Systems Inc     

(Current Assignee)
TriVium Systems Inc
Workflow engine for automating business processes in scalable multiprocessor computer platforms second message second messages

datacenter queue information third data set

delete command value pair

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

describes workflow initiation according to an organizational relationship between the users/business objects see claim…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXX
75

US20030097457A1

(Amitabh Saran, 2003)
(Original Assignee) Amitabh Saran; Mathews Manaloor; Arun Maheshwari; Sanjay Suri; Tarak Goradia     Scalable multiprocessor architecture for business computer platforms second message connected components

queue requests exchanging messages

message request message request

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

describes workflow initiation according to an organizational relationship between the users/business objects see claim…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXXXXXX
76

US20040019643A1

(Robert Zirnstein, 2004)
(Original Assignee) Canon Inc     

(Current Assignee)
Canon Inc
Remote command server producer worker predetermined location

determine matching producer worker obtaining output data

message request email address data

network traffic wireless telephone

consumer worker pairs body portion

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses sending the message to the intended recipient after parsing the message…

teaches wherein the email server comprises a portion of an…

teaches that the step of responding to said user terminals is performed by transmitting to each of said user terminals…

discloses if the extracted command is instead a request for a web page then command server module selects a function…
XXXXXXXXXXXXXXXX
77

CN1437146A

(叶天正, 2003)
(Original Assignee) International Business Machines Corporation (国际商业机器公司)     Method for composing, browsing, replying to, and forwarding e-mail, and e-mail client (撰写、浏览、答复、转发电子邮件的方法和电子邮件客户机) co-located workers 电子邮件系统 [e-mail system]

queue requests 包含的 [contained/included]

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches the stories for transmission to the end user station are selected on the basis of content of the story and…

teaches wherein the graphical user interface may be operable to display a notification alert generated by the…

discloses the referral list is formatted into an SMS application message and is pushed into and appears on the callers…

discloses a communication system and method for pushing electronic messages to a wireless portable device arrival…
XXX
78

US20030135618A1

(Ravikumar Pisupati, 2003)
(Original Assignee) Hewlett Packard Co     

(Current Assignee)
Hewlett Packard Development Co LP
Computer network for providing services and a method of providing services with a computer network identifying one computing resources

second message email messages

consumer worker information web pages

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches the stories for transmission to the end user station are selected on the basis of content of the story and…

teaches wherein the graphical user interface may be operable to display a notification alert generated by the…

discloses the referral list is formatted into an SMS application message and is pushed into and appears on the callers…

discloses a communication system and method for pushing electronic messages to a wireless portable device arrival…
XXXXX
79

EP1347390A1

(K. Ishibashi, c/o Future System Consulting Corp., 2003)
(Original Assignee) Future System Consulting Corp     

(Current Assignee)
Future System Consulting Corp
Framework system delete command monitoring operation

second message queue management

store instructions ring buffer

35 U.S.C. 103(a)

35 U.S.C. 102(b)
discloses all subject matter of the claimed invention as discussed above with respect to claims…

describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

teaches diverting said email message from delivery to the folder…

discloses a similar method of providing electronic group card in which when a signer/participant signs the card he/she is…
XXXX
80

JP2001285287A

(Jerremy Holland, 2001)
(Original Assignee) Agilent Technol Inc; アジレント・テクノロジーズ・インク     Publish/subscribe apparatus and method using pre-filtering and post-filtering local processing subscribe apparatus (スクライブ装置)

first server, second server client (クライアント)

readable storage device at least (少なくとも)

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches the display of a visual indicator that serves to notify a user of an event…

teaches the method further comprising requesting the sender to indicate a priority level of the first message ie…

teaches that processing the electronic file comprises parsing the electronic file and the address information of the…

teaches a content sharing system in which content of multimedia data on a server is shared with clients of a plurality…
XXXXX
81

US20010025300A1

(Graham Miller, 2001)
(Original Assignee) Zaplet Inc     

(Current Assignee)
METRICSTREAM Inc
Methods and systems to manage and track the states of electronic media first server, second server client terminals

producer worker information time stamp

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
teaches the approval of the book is based on the votes received online through a wide area network connection from at…

discloses the display of the annotated document on a web browser…

teaches filtering information based on the content standard predetermined by the client or the community column…

teaches a detection component coupled to the processor wherein the detection component comprises a sensor for…
XXXXXXXXX
82

US20020120664A1

(Robert Horn, 2002)
(Original Assignee) Aristos Logic Corp     

(Current Assignee)
Aristos Logic Corp
Scalable transaction processing pipeline queue requests logical block address

second message queue management

queue information first subset

35 U.S.C. 103(a)

35 U.S.C. 102(b)
discloses methods and systems for managing integration of a heterogeneous application landscape are disclosed and…

describes the inclusion of a hierarchical approach to resource organization as well as the assignment of roles to each…

teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

discloses the receiving and routing of a response message by the…
XXXXXX
83

JP2000155693A

(Hiroaki Komine, 2000)
(Original Assignee) Fujitsu Ltd; 富士通株式会社     Message control apparatus second server, queue user table management table (管理テーブル), in the table (テーブル中)

first virtual machine えること

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches of a workflow engine for automating business processes in scalable multiprocessor computer platforms including…

teaches of a system and method for managing content by workflows including a recipient of the message inheriting the…

teaches the database system is an in memory database system database server…

describes the use of LDAP as the standard directory service protocol for virtually all modern email systems…
XXXXXXXXXX
84

CN102930427A

(潘世行, 2013)
(Original Assignee) Huaqin Telecom Technology Co Ltd     

(Current Assignee)
Huaqin Telecom Technology Co Ltd
Schedule management method and mobile terminal thereof local processing performing parsing (进行解析)

queue requests request information (请求信息)

message request including access (包括访问)

XXXXXXXXX
85

CN102891779A

(徐立人, 2013)
(Original Assignee) BEIJING WRD TECHNOLOGY Co Ltd     

(Current Assignee)
BEIJING WRD TECHNOLOGY Co Ltd
Large-scale network performance measurement system and method for IP networks producer worker, consumer worker processing of results (结果进行)

second virtual machine, virtual machine manager cycle time (周期时间)

first server transport protocol (传输协议)

determine matching producer worker measurement process (测量过程)

XXXXXXXXXXXXXXXX
86

CN102800014A

(吴林, 2012)
(Original Assignee) BEIJING TEAMSUN SOFTWARE TECHNOLOGY Co Ltd; Beijing Teamsun Technology Co Ltd     

(Current Assignee)
BEIJING TEAMSUN SOFTWARE TECHNOLOGY Co Ltd ; Beijing Teamsun Technology Co Ltd
Financial data processing method for supply chain financing command channel data channel (数据通道)

computing device to provide local processing cache (的缓存)

XXX
87

CN102622426A

(俞晓鸿, 2012)
(Original Assignee) HANGZHOU SHANLIANG TECHNOLOGY Co Ltd     

(Current Assignee)
HANGZHOU SHANLIANG TECHNOLOGY Co Ltd
Database writing system and method queue cache state (个状态)

network connection upon request (请求时)

computing device to provide local processing cache (的缓存)

XXXXXXX
88

CN102646064A

(A.贝斯巴鲁亚, 2012)
(Original Assignee) Microsoft Corp     

(Current Assignee)
Microsoft Technology Licensing LLC
Incremental virtual machine backup supporting migration computing device a computing device (一种计算设备)

delete command, store instructions stored instructions (存储的指令)

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches a simulator simulating the computer system under design…

teaches starting simulation of the process from the process checkpoint by resuming simulation of a restart application…

teaches that a plurality of snapshots are stored chronologically col…

describes that one method for data backup and recovery is accomplished through a hard disk partition…
XXXXXXXXXXX
89

CN102479108A

(孙鹏, 2012)
(Original Assignee) Institute of Acoustics of CAS     

(Current Assignee)
Institute of Acoustics of CAS
Embedded system terminal resource management system and method for multiple application processes virtual machine manager terminal image (终端的图像)

queue usage information usage status (使用状态), use of (的使用)

delete command scheduling and (调度和)

consumer worker pairs upon conflict (冲突时)

XXXXXXXXX
90

CN102741843A

(王震, 2012)
(Original Assignee) Qingdao Hisense Media Network Technology Co Ltd     

(Current Assignee)
Juhaokan Technology Co Ltd
Method and apparatus for reading data from a database store instructions data update (数据更新)

queue user table identifier correspondence (标识对应), identifier (的标识)

XXXXXX
91

WO2012101464A1

(Andreas Johnsson, 2012)
(Original Assignee) Telefonaktiebolaget L M Ericsson (Publ)     Method for queuing data packets and node therefore producer worker information received data packet

identify one configured to store

datacenter queue information transmission time

35 U.S.C. 103(a)

35 U.S.C. 102(b)
teaches the motivation for separating charges based on use in order to balance the total margin for a user's…

discloses a method of managing a data access system where transfer of data between a content server and a remote site of…

teaches carrier may add or offset a subscribers bill based on service level agreement which makes obvious that…

teaches the tracking of data transfers within a network system…
XXX
92

JP2012155440A

(Takeya Fujimoto, 2012)
(Original Assignee) Nec Corp; 日本電気株式会社     Interconnection network control system and interconnection network control method computing device system (システム)

determining matching producer worker output destination (出力先)

XXXXXXXXXXX
93

KR20120111734A

(케이스 에이. 로웨리, 2012)
(Original Assignee) Advanced Micro Devices, Incorporated     Hypervisor isolation of processor cores VMM application virtual machine monitor

readable storage readable (판독가능)

35 U.S.C. 103(a)

35 U.S.C. 102(b)
discloses encoding an image signal into a digitized image signal…

discloses a componentized audio driver comprising an audio filter graph for processing an audio data stream kernel mode…

discloses a mixing system where having global effects such as chorus and reverb that can be applied in varying amounts…

discloses a method wherein a data transmission algorithm is used to ascertain network bandwidth…
XXXXXXXXXX
94

JP2012108576A

(Eisuke Ando, 2012)
(Original Assignee) Toyota Motor Corp; トヨタ自動車株式会社     Multi-core processor, processing execution method, and program command channel, delete command instruction (の命令)

computing device priority order (優先順)

XXXXXXXXXXXXX
95

WO2011071624A2

(Bradley Wheeler, 2011)
(Original Assignee) Microsoft Corporation     Cloud computing monitoring and management system identify one configured to store

readable storage, readable storage device management system

second server remote device

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses injecting static routes dynamically or a user configuring the routes see col…

discloses automatic billing of a customer subscriber when the customer makes a purchase signs up for a subscription…

discloses a user interface that allows the modification and deletion (disable) of key words which are used to generate the…

discloses a global XML web services architecture global web services environment which is built on XML web service…
XXXXX
96

CN101923491A

(过敏意, 2010)
(Original Assignee) Shanghai Jiaotong University     

(Current Assignee)
Shanghai Jiaotong University
Method for thread group address space scheduling and thread switching in a multi-core environment queue requests included (包含的)

queue cache when a thread (当线程)

delete command scheduling and (调度和)

XXXXXXX
97

CN101699806A

(汪林风, 2010)
(Original Assignee) ZTE Corp     

(Current Assignee)
ZTE Corp
Inter-network message interworking gateway, system and method second message multimedia message (的多媒体消息)

message request failure message (失败消息)

queue usage information receiving messages (接收消息)

network traffic when interfacing (在对接)

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(b)
discloses the feature even when a data connection does not exist between the wireless communication device and the…

teaches a method of facilitating interactive communication as claimed in claim…

teaches a receiver for receiving positioning data from satellites allowing the processor to use the positioning data…

teaches the memory storing the status report for a predefined length of time after the status report is transmitted to…
XXXXXXXXXXXXXX
98

JP2010278484A

(Toshiyuki Kamiya, 2010)
(Original Assignee) Hitachi Ltd; 株式会社日立製作所     Mail relay apparatus computing device to provide local processing size (のサイズ)

queue requests request (の要求)

35 U.S.C. 103(a)

35 U.S.C. 102(e)
discloses a user browsing a store of content stored inside the mobile phone and making a selection from the browser…

discloses a transferred data candidate display means as applied above and allows the user to select a content as above…

discloses the received TCP header data ACK HLEN CODE WINDOW and CHECKSUM fields in FIG…

teaches loading keys and certificates to the wireless device via a wired connection between the wireless device and…
XXX
99

WO2009111799A2

(Peter Nickolov, 2009)
(Original Assignee) 3Tera, Inc.     Globally distributed utility computing cloud second server, second message second virtual machine

computing device application components

network connection network connection

35 U.S.C. 103(a)

35 U.S.C. 102(e)

35 U.S.C. 102(a)
discloses registering a set of appliance images to an image database…

discloses wherein the subscription model comprises at least one of licensing information or a set of entitlements to…

teaches all the subject matter as discussed above with respect to claim…

discloses that the separate portable medium may be a flash drive…
XXXXXXXXXXXXX
100

WO2009026589A2

(Fred Cohen, 2009)
(Original Assignee) Fred Cohen     Method and/or system for providing and/or analyzing and/or presenting decision strategies matching producer, producer worker defined condition, storage means

readable storage device later retrieval

computing device electronic data

35 U.S.C. 103(a)

35 U.S.C. 102(b)
discloses a system for processing and reporting information and data for use by business to collect analyze…

describes the at least one activity event at the real world entity and…

discloses user manipulation of the user input such as by dragging selected element within the GUI…

teaches a method wherein providing the filter count to the user device comprises providing disabled filters…
XXXXXXXXXXXXXXXX
101

WO2009014868A2

(Yadhu Gopalan, 2009)
(Original Assignee) Microsoft Corporation     Scheduling threads in multi-core systems queue usage information, datacenter queue information multi-core processing system

datacenter queue multi-core processor

readable storage readable storage

computing device computing device

second message processing time

35 U.S.C. 103(a)

35 U.S.C. 102(e)
teaches a load balancer wherein the load balancing between the cluster is based on a priority based scheme ie…

teaches wherein each processing element includes multiple processors coupled to a single front side bus…

teaches inspecting core of the proximity list multiple times…

discloses as his invention a method and system for establishing a…
XXXXXXXXXXXXXXXXXX
102

CN101216814A

(朱而刚, 2008)
(Original Assignee) Hangzhou H3C Technologies Co Ltd     

(Current Assignee)
New H3C Technologies Co Ltd
Communication method and system among multiple cores and multiple operating systems command channel data channel (数据通道)

network connection interconnection (相互连接)

XXXXX




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
USENIX Association Proceedings Of The 2006 USENIX Annual Technical Conference. : 29-42 2006

Publication Year: 2006

High Performance VMM-bypass I/O In Virtual Machines

International Business Machines Corporation

Liu, Huang, Abali, Panda, Usenix
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (virtual machine monitor) at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .
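For orientation only, the following is a minimal, hypothetical sketch of the mechanism recited in claim 1: a message a producer worker sends to a remote datacenter queue is cached at the co-located first server and served locally to a consumer worker. The class and method names (LocalQueueCache, DatacenterQueueClient) are invented for illustration and are drawn from neither the patent nor the Liu reference.

```python
# Illustrative sketch only; names and structure are assumptions for this example.
from collections import defaultdict, deque


class DatacenterQueueClient:
    """Stand-in for the remote datacenter queue at the second server."""

    def __init__(self):
        self._queues = defaultdict(deque)

    def send(self, queue_name, message):
        self._queues[queue_name].append(message)

    def receive(self, queue_name):
        q = self._queues[queue_name]
        return q.popleft() if q else None


class LocalQueueCache:
    """Hypothetical VMM-level queue cache at the first server for co-located workers."""

    def __init__(self, remote):
        self._remote = remote
        self._cache = defaultdict(deque)

    def on_producer_send(self, queue_name, message):
        # Store the intercepted message locally and forward it so the
        # authoritative remote queue stays consistent.
        self._cache[queue_name].append(message)
        self._remote.send(queue_name, message)

    def on_consumer_request(self, queue_name):
        # Serve a co-located consumer from the local cache when possible,
        # falling back to the remote datacenter queue otherwise.
        q = self._cache[queue_name]
        return q.popleft() if q else self._remote.receive(queue_name)


if __name__ == "__main__":
    cache = LocalQueueCache(DatacenterQueueClient())
    cache.on_producer_send("jobs", {"task": "resize", "id": 1})
    print(cache.on_consumer_request("jobs"))  # served locally, no remote round trip
```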

US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel associated with the datacenter queue (virtual machine monitor) ;

and modifying the message in response to receiving the signal .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command (I/O device) from the datacenter queue (virtual machine monitor) , deleting the message from the datacenter queue .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device (readable storage, delete command) virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request and the datacenter queue (virtual machine monitor) associated with the message request .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application (virtual machine monitor) is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (virtual machine monitor) at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application (virtual machine monitor) is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command (I/O device) from the datacenter queue (virtual machine monitor) , delete the message from the first server .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device (readable storage, delete command) virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .
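As a companion to the sketch after claim 1, the following hypothetical fragment illustrates the command-channel behavior recited in claims 2, 3, and 8: modifying the locally stored message in response to a signal and deleting it in response to a delete command. The signal names "modify" and "delete" are assumptions made for the example; the patent does not enumerate them.

```python
# Illustrative sketch only; signal names are invented for this example.
class CommandChannelHandler:
    def __init__(self, local_messages):
        # local_messages: mapping of queue name -> deque of locally cached messages
        self._local = local_messages

    def on_signal(self, queue_name, signal, payload=None):
        messages = self._local.get(queue_name)
        if not messages:
            return
        if signal == "modify":
            # Modify the locally stored copy in response to the signal.
            messages[0] = {**messages[0], **(payload or {})}
        elif signal == "delete":
            # Delete the local copy when the datacenter queue issues a delete command.
            messages.popleft()


if __name__ == "__main__":
    from collections import deque
    local = {"jobs": deque([{"task": "resize", "id": 1}])}
    handler = CommandChannelHandler(local)
    handler.on_signal("jobs", "modify", {"priority": "high"})
    handler.on_signal("jobs", "delete")
    print(local["jobs"])  # deque([]) once the delete command has been applied
```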

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application (virtual machine monitor) is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue (virtual machine monitor) associated with the message request .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application (virtual machine monitor) is further configured to : observe network traffic through a network connection to identify the producer worker associated with the message .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application (virtual machine monitor) is further configured to : observe network traffic through a network connection to detect the datacenter queue (virtual machine monitor) associated with the message .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application (virtual machine monitor) is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (virtual machine monitor) information associated with the producer worker , and datacenter queue information associated with the consumer worker .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application (virtual machine monitor) is further configured to : update the queue user table based on the observed queue usage information .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application (virtual machine monitor) is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (virtual machine monitor) , and identify a message request that includes matching the consumer worker to the other datacenter queue .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .
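The queue user table and producer/consumer matching recited in claims 12 through 14 can be pictured with the following minimal sketch. The table fields and the matching rule (a queue counts as matched once at least one producer and one consumer have been observed using it) are assumptions for illustration, not limitations drawn from the patent.

```python
# Illustrative sketch only; field names and the matching rule are assumptions.
from collections import defaultdict


class QueueUserTable:
    def __init__(self):
        # queue name -> {"producers": set of worker ids, "consumers": set of worker ids}
        self._table = defaultdict(lambda: {"producers": set(), "consumers": set()})

    def observe_send(self, worker_id, queue_name):
        self._table[queue_name]["producers"].add(worker_id)

    def observe_request(self, worker_id, queue_name):
        self._table[queue_name]["consumers"].add(worker_id)

    def matched_pairs(self):
        # A queue is "matched" when both a producer and a consumer have been observed.
        return {
            queue: (users["producers"], users["consumers"])
            for queue, users in self._table.items()
            if users["producers"] and users["consumers"]
        }


if __name__ == "__main__":
    table = QueueUserTable()
    table.observe_send("vm-producer-1", "render-jobs")
    table.observe_request("vm-consumer-7", "render-jobs")
    print(table.matched_pairs())  # {'render-jobs': ({'vm-producer-1'}, {'vm-consumer-7'})}
```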

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application (virtual machine monitor) is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application (virtual machine monitor) is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (virtual machine monitor) at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (virtual machine monitor) information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (virtual machine monitor) , and identify a message request that includes matching the consumer worker to the other datacenter queue .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request and the datacenter queue (virtual machine monitor) associated with the message request .
High Performance VMM-bypass I/O In Virtual Machines . Currently , I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (datacenter queue, VMM application) (VMM) and/or a privileged VM for each I/O operation , which may turn out to be a performance bottleneck for systems with high I/O demands , especially those equipped with modern high speed interconnects such as InfiniBand . In this paper , we propose a new device virtualization model called VMM-bypass I/O , which extends the idea of OS-bypass originated from user-level communication . Essentially , VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM . By exploiting the intelligence found in modern high speed network interfaces , VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation . To demonstrate the idea of VMM-bypass , we have developed a prototype called Xen-IB , which offers InfiniBand virtualization support in the Xen 3 . 0 VM environment . Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand . Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS. 17 (6): 1127-1144 JUN 1999

Publication Year: 1999

Priority Queue Schedulers With Approximate Sorting In Output-buffered Switches

University of Virginia (UVA), International Business Machines Corporation

Liebeherr, Wrege
US9479472B2
CLAIM 1
. A method to locally process queue requests (other time) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
Priority Queue Schedulers With Approximate Sorting In Output-buffered Switches . All recently proposed packet-scheduling algorithms for output-buffered switches that support quality-of-service (QoS) transmit packets in some priority order , e . g . , according to deadlines , virtual finishing times , eligibility times , or other time (queue requests) stamps that are associated with a packet . Since maintaining a sorted priority queue introduces significant overhead , much emphasis on QoS scheduler design is put on methods to simplify the task of maintaining a priority queue . In this paper , we consider an approach that attempts to approximate a sorted priority queue at an output-buffered switch . The goal is to trade off less accurate sorting for lower computational overhead . Specifically , this paper presents a scheduler that approximates the sorted queue of an earliest-deadline-first (EDF) scheduler . The approximate scheduler is implemented using a set of prioritized first-in/first-out (FTFO) queues that are periodically relabeled . The scheduler can be efficiently implemented with a fixed number of pointer manipulations , thus enabling an implementation in hardware . Necessary and sufficient conditions for the worst-case delays of the scheduler with approximate sorting are presented . Numerical examples , including traces based on MPEG video , demonstrate that in realistic scenarios , scheduling with approximate sorting is a viable option .
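To make the cited technique concrete, the following is a rough sketch of the central idea summarized in the Liebeherr abstract: replacing a fully sorted earliest-deadline-first priority queue with a small set of prioritized FIFO queues that are periodically relabeled. The bucket width and the relabeling rule below are assumptions chosen for the example, not the paper's exact design.

```python
# Illustrative sketch only; bucket width and relabeling rule are assumptions.
from collections import deque


class ApproximateEDFScheduler:
    def __init__(self, num_fifos=4, bucket_width=10):
        self._fifos = [deque() for _ in range(num_fifos)]
        self._bucket_width = bucket_width
        self._base_deadline = 0  # deadline range mapped to the highest-priority FIFO

    def enqueue(self, packet, deadline):
        # Map the deadline to one of the prioritized FIFOs instead of sorting.
        index = min(
            (deadline - self._base_deadline) // self._bucket_width,
            len(self._fifos) - 1,
        )
        self._fifos[max(index, 0)].append((deadline, packet))

    def dequeue(self):
        # Serve the highest-priority non-empty FIFO first.
        for fifo in self._fifos:
            if fifo:
                return fifo.popleft()
        return None

    def relabel(self, now):
        # Periodic relabeling: once the highest-priority bucket has been drained
        # and its deadline range has passed, recycle it as the lowest-priority bucket.
        if now - self._base_deadline >= self._bucket_width and not self._fifos[0]:
            self._fifos.append(self._fifos.pop(0))
            self._base_deadline += self._bucket_width


if __name__ == "__main__":
    sched = ApproximateEDFScheduler()
    sched.enqueue("pkt-late", deadline=35)
    sched.enqueue("pkt-early", deadline=3)
    print(sched.dequeue())  # (3, 'pkt-early'): earliest-deadline bucket served first
```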

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests (other time) from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
Priority Queue Schedulers With Approximate Sorting In Output-buffered Switches . All recently proposed packet-scheduling algorithms for output-buffered switches that support quality-of-service (QoS) transmit packets in some priority order , e . g . , according to deadlines , virtual finishing times , eligibility times , or other time (queue requests) stamps that are associated with a packet . Since maintaining a sorted priority queue introduces significant overhead , much emphasis on QoS scheduler design is put on methods to simplify the task of maintaining a priority queue . In this paper , we consider an approach that attempts to approximate a sorted priority queue at an output-buffered switch . The goal is to trade off less accurate sorting for lower computational overhead . Specifically , this paper presents a scheduler that approximates the sorted queue of an earliest-deadline-first (EDF) scheduler . The approximate scheduler is implemented using a set of prioritized first-in/first-out (FTFO) queues that are periodically relabeled . The scheduler can be efficiently implemented with a fixed number of pointer manipulations , thus enabling an implementation in hardware . Necessary and sufficient conditions for the worst-case delays of the scheduler with approximate sorting are presented . Numerical examples , including traces based on MPEG video , demonstrate that in realistic scenarios , scheduling with approximate sorting is a viable option .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information (time stamp) , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
Priority Queue Schedulers With Approximate Sorting In Output-buffered Switches . All recently proposed packet-scheduling algorithms for output-buffered switches that support quality-of-service (QoS) transmit packets in some priority order , e . g . , according to deadlines , virtual finishing times , eligibility times , or other time stamp (producer worker information) s that are associated with a packet . Since maintaining a sorted priority queue introduces significant overhead , much emphasis on QoS scheduler design is put on methods to simplify the task of maintaining a priority queue . In this paper , we consider an approach that attempts to approximate a sorted priority queue at an output-buffered switch . The goal is to trade off less accurate sorting for lower computational overhead . Specifically , this paper presents a scheduler that approximates the sorted queue of an earliest-deadline-first (EDF) scheduler . The approximate scheduler is implemented using a set of prioritized first-in/first-out (FTFO) queues that are periodically relabeled . The scheduler can be efficiently implemented with a fixed number of pointer manipulations , thus enabling an implementation in hardware . Necessary and sufficient conditions for the worst-case delays of the scheduler with approximate sorting are presented . Numerical examples , including traces based on MPEG video , demonstrate that in realistic scenarios , scheduling with approximate sorting is a viable option .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer (computational overhead) worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
Priority Queue Schedulers With Approximate Sorting In Output-buffered Switches . All recently proposed packet-scheduling algorithms for output-buffered switches that support quality-of-service (QoS) transmit packets in some priority order , e . g . , according to deadlines , virtual finishing times , eligibility times , or other time stamps that are associated with a packet . Since maintaining a sorted priority queue introduces significant overhead , much emphasis on QoS scheduler design is put on methods to simplify the task of maintaining a priority queue . In this paper , we consider an approach that attempts to approximate a sorted priority queue at an output-buffered switch . The goal is to trade off less accurate sorting for lower computational overhead (matching producer, determining matching producer worker) . Specifically , this paper presents a scheduler that approximates the sorted queue of an earliest-deadline-first (EDF) scheduler . The approximate scheduler is implemented using a set of prioritized first-in/first-out (FTFO) queues that are periodically relabeled . The scheduler can be efficiently implemented with a fixed number of pointer manipulations , thus enabling an implementation in hardware . Necessary and sufficient conditions for the worst-case delays of the scheduler with approximate sorting are presented . Numerical examples , including traces based on MPEG video , demonstrate that in realistic scenarios , scheduling with approximate sorting is a viable option .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer (computational overhead) and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
Priority Queue Schedulers With Approximate Sorting In Output-buffered Switches . All recently proposed packet-scheduling algorithms for output-buffered switches that support quality-of-service (QoS) transmit packets in some priority order , e . g . , according to deadlines , virtual finishing times , eligibility times , or other time stamps that are associated with a packet . Since maintaining a sorted priority queue introduces significant overhead , much emphasis on QoS scheduler design is put on methods to simplify the task of maintaining a priority queue . In this paper , we consider an approach that attempts to approximate a sorted priority queue at an output-buffered switch . The goal is to trade off less accurate sorting for lower computational overhead (matching producer, determining matching producer worker) . Specifically , this paper presents a scheduler that approximates the sorted queue of an earliest-deadline-first (EDF) scheduler . The approximate scheduler is implemented using a set of prioritized first-in/first-out (FIFO) queues that are periodically relabeled . The scheduler can be efficiently implemented with a fixed number of pointer manipulations , thus enabling an implementation in hardware . Necessary and sufficient conditions for the worst-case delays of the scheduler with approximate sorting are presented . Numerical examples , including traces based on MPEG video , demonstrate that in realistic scenarios , scheduling with approximate sorting is a viable option .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests (other time) from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
Priority Queue Schedulers With Approximate Sorting In Output-buffered Switches . All recently proposed packet-scheduling algorithms for output-buffered switches that support quality-of-service (QoS) transmit packets in some priority order , e . g . , according to deadlines , virtual finishing times , eligibility times , or other time (queue requests) stamps that are associated with a packet . Since maintaining a sorted priority queue introduces significant overhead , much emphasis on QoS scheduler design is put on methods to simplify the task of maintaining a priority queue . In this paper , we consider an approach that attempts to approximate a sorted priority queue at an output-buffered switch . The goal is to trade off less accurate sorting for lower computational overhead . Specifically , this paper presents a scheduler that approximates the sorted queue of an earliest-deadline-first (EDF) scheduler . The approximate scheduler is implemented using a set of prioritized first-in/first-out (FIFO) queues that are periodically relabeled . The scheduler can be efficiently implemented with a fixed number of pointer manipulations , thus enabling an implementation in hardware . Necessary and sufficient conditions for the worst-case delays of the scheduler with approximate sorting are presented . Numerical examples , including traces based on MPEG video , demonstrate that in realistic scenarios , scheduling with approximate sorting is a viable option .
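To make the intercept-and-cache steps recited in claim 17 easier to follow, here is a minimal Python sketch of a local queue cache that stores intercepted producer messages at the first server, answers co-located consumer requests from that cache, and modifies a cached message when a command-channel signal arrives. The remote_queue stand-in, the dict-based messages, and all method names are illustrative assumptions, not the patent's implementation.

class LocalQueueCache:
    # Illustrative intercept module: cache intercepted producer messages locally,
    # answer co-located consumer requests from the cache, and fall back to the
    # remote datacenter queue (a stand-in object here) when the cache is empty.
    def __init__(self, remote_queue):
        self.remote_queue = remote_queue
        self.cache = {}  # queue id -> list of cached messages (dicts in this sketch)

    def intercept_send(self, queue_id, message):
        # Store the producer worker's message in the queue cache at the first server.
        self.cache.setdefault(queue_id, []).append(message)

    def handle_request(self, queue_id):
        # Provide a cached message to the consumer worker if one is available.
        if self.cache.get(queue_id):
            return self.cache[queue_id].pop(0)
        return self.remote_queue.pop(queue_id)  # otherwise defer to the remote queue

    def handle_signal(self, queue_id, index, new_fields):
        # A command-channel signal may require modifying a locally stored message.
        messages = self.cache.get(queue_id, [])
        if 0 <= index < len(messages):
            messages[index].update(new_fields)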

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information (time stamp) , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
Priority Queue Schedulers With Approximate Sorting In Output-buffered Switches . All recently proposed packet-scheduling algorithms for output-buffered switches that support quality-of-service (QoS) transmit packets in some priority order , e . g . , according to deadlines , virtual finishing times , eligibility times , or other time stamp (producer worker information) s that are associated with a packet . Since maintaining a sorted priority queue introduces significant overhead , much emphasis on QoS scheduler design is put on methods to simplify the task of maintaining a priority queue . In this paper , we consider an approach that attempts to approximate a sorted priority queue at an output-buffered switch . The goal is to trade off less accurate sorting for lower computational overhead . Specifically , this paper presents a scheduler that approximates the sorted queue of an earliest-deadline-first (EDF) scheduler . The approximate scheduler is implemented using a set of prioritized first-in/first-out (FIFO) queues that are periodically relabeled . The scheduler can be efficiently implemented with a fixed number of pointer manipulations , thus enabling an implementation in hardware . Necessary and sufficient conditions for the worst-case delays of the scheduler with approximate sorting are presented . Numerical examples , including traces based on MPEG video , demonstrate that in realistic scenarios , scheduling with approximate sorting is a viable option .
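Claim 18's observed queue usage information, including time-stamped producer worker information, can be pictured with the small update routine below; the tuple key, field names, and example identifiers are assumptions made for illustration only.

import time
from collections import defaultdict

def update_queue_user_table(table, worker_id, queue_id, role):
    # Illustrative update of a queue user table entry with time-stamped usage
    # information; the field names and tuple key are assumptions, not claim terms.
    entry = table[(worker_id, queue_id)]
    entry["role"] = role                      # "producer" or "consumer"
    entry["last_seen"] = time.time()          # time stamp recorded as worker information
    entry["count"] = entry.get("count", 0) + 1
    return table

# Example (hypothetical identifiers):
# table = defaultdict(dict)
# update_queue_user_table(table, "worker-1", "orders-queue", "producer")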

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer (computational overhead) worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
Priority Queue Schedulers With Approximate Sorting In Output-buffered Switches . All recently proposed packet-scheduling algorithms for output-buffered switches that support quality-of-service (QoS) transmit packets in some priority order , e . g . , according to deadlines , virtual finishing times , eligibility times , or other time stamps that are associated with a packet . Since maintaining a sorted priority queue introduces significant overhead , much emphasis on QoS scheduler design is put on methods to simplify the task of maintaining a priority queue . In this paper , we consider an approach that attempts to approximate a sorted priority queue at an output-buffered switch . The goal is to trade off less accurate sorting for lower computational overhead (matching producer, determining matching producer worker) . Specifically , this paper presents a scheduler that approximates the sorted queue of an earliest-deadline-first (EDF) scheduler . The approximate scheduler is implemented using a set of prioritized first-in/first-out (FIFO) queues that are periodically relabeled . The scheduler can be efficiently implemented with a fixed number of pointer manipulations , thus enabling an implementation in hardware . Necessary and sufficient conditions for the worst-case delays of the scheduler with approximate sorting are presented . Numerical examples , including traces based on MPEG video , demonstrate that in realistic scenarios , scheduling with approximate sorting is a viable option .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
PROCEEDINGS OF THE 3RD USENIX WINDOWS NT SYMPOSIUM. : 21-30 1999

Publication Year: 1999

High-performance Distributed Objects Over System Area Networks

No Affiliation

Forin, Hunt, Li, Wang, Usenix
US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel (full advantage) associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
High-performance Distributed Objects Over System Area Networks . In this paper , we describe an approach to build high-performance , commercial distributed object systems over system area networks (SANs) with user-level networking . The specific platforms used in this study are the Virtual Interface Architecture (VIA) and Microsoft's Distributed Component Object Model (DCOM) . We give a detailed functional and performance analysis of DCOM and apply optimizations at several layers to take full advantage (command channel) of modern high-speed networks . Our optimizations preserve the full set of DCOM features including security , alternative threading models , and Microsoft Transaction Server (MTS) . Through extensive runtime , transport and marshaling optimization , our system achieves round-trip latencies of 72 microseconds for DCOM calls and 174 microseconds for MTS calls , and an application bandwidth of 86 . 1 megabytes per second . We also examine the performance gains in real applications .

US9479472B2
CLAIM 7
. A computing device (speed networks) to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel (full advantage) associated with the datacenter queue .
High-performance Distributed Objects Over System Area Networks . In this paper , we describe an approach to build high-performance , commercial distributed object systems over system area networks (SANs) with user-level networking . The specific platforms used in this study are the Virtual Interface Architecture (VIA) and Microsoft's Distributed Component Object Model (DCOM) . We give a detailed functional and performance analysis of DCOM and apply optimizations at several layers to take full advantage (command channel) of modern high-speed networks (computing device) . Our optimizations preserve the full set of DCOM features including security , alternative threading models , and Microsoft Transaction Server (MTS) . Through extensive runtime , transport and marshaling optimization , our system achieves round-trip latencies of 72 microseconds for DCOM calls and 174 microseconds for MTS calls , and an application bandwidth of 86 . 1 megabytes per second . We also examine the performance gains in real applications .

US9479472B2
CLAIM 8
. The computing device (speed networks) of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server .
High-performance Distributed Objects Over System Area Networks . In this paper , we describe an approach to build high-performance , commercial distributed object systems over system area networks (SANs) with user-level networking . The specific platforms used in this study are the Virtual Interface Architecture (VIA) and Microsoft's Distributed Component Object Model (DCOM) . We give a detailed functional and performance analysis of DCOM and apply optimizations at several layers to take full advantage of modern high-speed networks (computing device) . Our optimizations preserve the full set of DCOM features including security , alternative threading models , and Microsoft Transaction Server (MTS) . Through extensive runtime , transport and marshaling optimization , our system achieves round-trip latencies of 72 microseconds for DCOM calls and 174 microseconds for MTS calls , and an application bandwidth of 86 . 1 megabytes per second . We also examine the performance gains in real applications .

US9479472B2
CLAIM 9
. The computing device (speed networks) of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
High-performance Distributed Objects Over System Area Networks . In this paper , we describe an approach to build high-performance , commercial distributed object systems over system area networks (SANs) with user-level networking . The specific platforms used in this study are the Virtual Interface Architecture (VIA) and Microsoft's Distributed Component Object Model (DCOM) . We give a detailed functional and performance analysis of DCOM and apply optimizations at several layers to take full advantage of modern high-speed networks (computing device) . Our optimizations preserve the full set of DCOM features including security , alternative threading models , and Microsoft Transaction Server (MTS) . Through extensive runtime , transport and marshaling optimization , our system achieves round-trip latencies of 72 microseconds for DCOM calls and 174 microseconds for MTS calls , and an application bandwidth of 86 . 1 megabytes per second . We also examine the performance gains in real applications .

US9479472B2
CLAIM 10
. The computing device (speed networks) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker associated with the message .
High-performance Distributed Objects Over System Area Networks . In this paper , we describe an approach to build high-performance , commercial distributed object systems over system area networks (SANs) with user-level networking . The specific platforms used in this study are the Virtual Interface Architecture (VIA) and Microsoft's Distributed Component Object Model (DCOM) . We give a detailed functional and performance analysis of DCOM and apply optimizations at several layers to take full advantage of modern high-speed networks (computing device) . Our optimizations preserve the full set of DCOM features including security , alternative threading models , and Microsoft Transaction Server (MTS) . Through extensive runtime , transport and marshaling optimization , our system achieves round-trip latencies of 72 microseconds for DCOM calls and 174 microseconds for MTS calls , and an application bandwidth of 86 . 1 megabytes per second . We also examine the performance gains in real applications .

US9479472B2
CLAIM 11
. The computing device (speed networks) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue associated with the message .
High-performance Distributed Objects Over System Area Networks . In this paper , we describe an approach to build high-performance , commercial distributed object systems over system area networks (SANs) with user-level networking . The specific platforms used in this study are the Virtual Interface Architecture (VIA) and Microsoft's Distributed Component Object Model (DCOM) . We give a detailed functional and performance analysis of DCOM and apply optimizations at several layers to take full advantage of modern high-speed networks (computing device) . Our optimizations preserve the full set of DCOM features including security , alternative threading models , and Microsoft Transaction Server (MTS) . Through extensive runtime , transport and marshaling optimization , our system achieves round-trip latencies of 72 microseconds for DCOM calls and 174 microseconds for MTS calls , and an application bandwidth of 86 . 1 megabytes per second . We also examine the performance gains in real applications .

US9479472B2
CLAIM 12
. The computing device (speed networks) of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
High-performance Distributed Objects Over System Area Networks . In this paper , we describe an approach to build high-performance , commercial distributed object systems over system area networks (SANs) with user-level networking . The specific platforms used in this study are the Virtual Interface Architecture (VIA) and Microsoft's Distributed Component Object Model (DCOM) . We give a detailed functional and performance analysis of DCOM and apply optimizations at several layers to take full advantage of modern high-speed networks (computing device) . Our optimizations preserve the full set of DCOM features including security , alternative threading models , and Microsoft Transaction Server (MTS) . Through extensive runtime , transport and marshaling optimization , our system achieves round-trip latencies of 72 microseconds for DCOM calls and 174 microseconds for MTS calls , and an application bandwidth of 86 . 1 megabytes per second . We also examine the performance gains in real applications .

US9479472B2
CLAIM 13
. The computing device (speed networks) of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information .
High-performance Distributed Objects Over System Area Networks . In this paper , we describe an approach to build high-performance , commercial distributed object systems over system area networks (SANs) with user-level networking . The specific platforms used in this study are the Virtual Interface Architecture (VIA) and Microsoft's Distributed Component Object Model (DCOM) . We give a detailed functional and performance analysis of DCOM and apply optimizations at several layers to take full advantage of modern high-speed networks (computing device) . Our optimizations preserve the full set of DCOM features including security , alternative threading models , and Microsoft Transaction Server (MTS) . Through extensive runtime , transport and marshaling optimization , our system achieves round-trip latencies of 72 microseconds for DCOM calls and 174 microseconds for MTS calls , and an application bandwidth of 86 . 1 megabytes per second . We also examine the performance gains in real applications .

US9479472B2
CLAIM 14
. The computing device (speed networks) of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
High-performance Distributed Objects Over System Area Networks . In this paper , we describe an approach to build high-performance , commercial distributed object systems over system area networks (SANs) with user-level networking . The specific platforms used in this study are the Virtual Interface Architecture (VIA) and Microsoft's Distributed Component Object Model (DCOM) . We give a detailed functional and performance analysis of DCOM and apply optimizations at several layers to take full advantage of modern high-speed networks (computing device) . Our optimizations preserve the full set of DCOM features including security , alternative threading models , and Microsoft Transaction Server (MTS) . Through extensive runtime , transport and marshaling optimization , our system achieves round-trip latencies of 72 microseconds for DCOM calls and 174 microseconds for MTS calls , and an application bandwidth of 86 . 1 megabytes per second . We also examine the performance gains in real applications .

US9479472B2
CLAIM 15
. The computing device (speed networks) of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
High-performance Distributed Objects Over System Area Networks . In this paper , we describe an approach to build high-performance , commercial distributed object systems over system area networks (SANs) with user-level networking . The specific platforms used in this study are the Virtual Interface Architecture (VIA) and Microsoft's Distributed Component Object Model (DCOM) . We give a detailed functional and performance analysis of DCOM and apply optimizations at several layers to take full advantage of modern high-speed networks (computing device) . Our optimizations preserve the full set of DCOM features including security , alternative threading models , and Microsoft Transaction Server (MTS) . Through extensive runtime , transport and marshaling optimization , our system achieves round-trip latencies of 72 microseconds for DCOM calls and 174 microseconds for MTS calls , and an application bandwidth of 86 . 1 megabytes per second . We also examine the performance gains in real applications .

US9479472B2
CLAIM 16
. The computing device (speed networks) of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
High-performance Distributed Objects Over System Area Networks . In this paper , we describe an approach to build high-performance , commercial distributed object systems over system area networks (SANs) with user-level networking . The specific platforms used in this study are the Virtual Interface Architecture (VIA) and Microsoft's Distributed Component Object Model (DCOM) . We give a detailed functional and performance analysis of DCOM and apply optimizations at several layers to take full advantage of modern high-speed networks (computing device) . Our optimizations preserve the full set of DCOM features including security , alternative threading models , and Microsoft Transaction Server (MTS) . Through extensive runtime , transport and marshaling optimization , our system achieves round-trip latencies of 72 microseconds for DCOM calls and 174 microseconds for MTS calls , and an application bandwidth of 86 . 1 megabytes per second . We also examine the performance gains in real applications .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel (full advantage) associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
High-performance Distributed Objects Over System Area Networks . In this paper , we describe an approach to build high-performance , commercial distributed object systems over system area networks (SANs) with user-level networking . The specific platforms used in this study are the Virtual Interface Architecture (VIA) and Microsoft's Distributed Component Object Model (DCOM) . We give a detailed functional and performance analysis of DCOM and apply optimizations at several layers to take full advantage (command channel) of modern high-speed networks . Our optimizations preserve the full set of DCOM features including security , alternative threading models , and Microsoft Transaction Server (MTS) . Through extensive runtime , transport and marshaling optimization , our system achieves round-trip latencies of 72 microseconds for DCOM calls and 174 microseconds for MTS calls , and an application bandwidth of 86 . 1 megabytes per second . We also examine the performance gains in real applications .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
PROCEEDINGS OF THE SECOND SYMPOSIUM ON OPERATING SYSTEMS DESIGN AND IMPLEMENTATION (OSDI 96). : 245-259 1996

Publication Year: 1996

An Implementation Of The Hamlyn Sender-managed Interface Architecture

Hewlett Packard Labs

Buzzard, Jacobson, Mackey, Marovich, Wilkes, Usenix Assoc
US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table (memory management) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
An Implementation Of The Hamlyn Sender-managed Interface Architecture . As the latency and bandwidth of multicomputer interconnection fabrics improve , there is a growing need for an interface between them and host processors that does not hide these gains behind software overhead . The Hamlyn interface architecture does this . It uses sender-based memory management (queue user table) to eliminate receiver buffer overruns , provides applications with direct hardware access to minimize latency , supports adaptive routing networks to allow higher throughput , and offers full protection between applications so that it can be used in a general-purpose computing environment . To test these claims we built a prototype Hamlyn interface for a Myrinet network connected to a standard HP workstation and report here on its design and performance . Our interface delivers an application-to-application round trip time of 28 μs for short messages and a one-way time of 17 . 4 μs + 32 . 6 ns/byte (30 . 7 MB/s) for longer ones , while requiring fewer CPU cycles than an aggressive implementation of Active Messages on the CM-5 .
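The sender-based memory management summarized above can be pictured with this simplified Python sketch, in which a sender must reserve a receiver-side buffer slot before writing, so receiver buffer overruns cannot occur; the slot count and all names are illustrative assumptions and do not come from the Hamlyn paper.

class SenderManagedBuffer:
    # Illustrative sender-managed receive buffer: the sender must reserve a
    # receiver-side slot before writing, so the receiver's buffer cannot overrun.
    def __init__(self, slots=16):
        self.slots = [None] * slots       # receiver-side buffer
        self.free = list(range(slots))    # slot indices the sender may still claim

    def reserve(self):
        # Sender claims a slot up front; if none is free it must wait, not overrun.
        return self.free.pop() if self.free else None

    def send(self, slot, payload):
        # Write directly into the slot reserved earlier (no receiver-side copy here).
        self.slots[slot] = payload

    def receive(self, slot):
        # Receiver consumes the payload and returns the slot to the sender's pool.
        payload, self.slots[slot] = self.slots[slot], None
        self.free.append(slot)
        return payload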

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application is further configured to : update the queue user table (memory management) based on the observed queue usage information .
An Implementation Of The Hamlyn Sender-managed Interface Architecture . As the latency and bandwidth of multicomputer interconnection fabrics improve , there is a growing need for an interface between them and host processors that does not hide these gains behind software overhead . The Hamlyn interface architecture does this . It uses sender-based memory management (queue user table) to eliminate receiver buffer overruns , provides applications with direct hardware access to minimize latency , supports adaptive routing networks to allow higher throughput , and offers full protection between applications so that it can be used in a general-purpose computing environment . To test these claims we built a prototype Hamlyn interface for a Myrinet network connected to a standard HP workstation and report here on its design and performance . Our interface delivers an application-to-application round trip time of 28 μs for short messages and a one-way time of 17 . 4 μs + 32 . 6 ns/byte (30 . 7 MB/s) for longer ones , while requiring fewer CPU cycles than an aggressive implementation of Active Messages on the CM-5 .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table (memory management) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
An Implementation Of The Hamlyn Sender-managed Interface Architecture . As the latency and bandwidth of multicomputer interconnection fabrics improve , there is a growing need for an interface between them and host processors that does not hide these gains behind software overhead . The Hamlyn interface architecture does this . It uses sender-based memory management (queue user table) to eliminate receiver buffer overruns , provides applications with direct hardware access to minimize latency , supports adaptive routing networks to allow higher throughput , and offers full protection between applications so that it can be used in a general-purpose computing environment . To test these claims we built a prototype Hamlyn interface for a Myrinet network connected to a standard HP workstation and report here on its design and performance . Our interface delivers an application-to-application round trip time of 28 μs for short messages and a one-way time of 17 . 4 μs + 32 . 6 ns/byte (30 . 7 MB/s) for longer ones , while requiring fewer CPU cycles than an aggressive implementation of Active Messages on the CM-5 .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table (memory management) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
An Implementation Of The Hamlyn Sender-managed Interface Architecture . As the latency and bandwidth of multicomputer interconnection fabrics improve , there is a growing need for an interface between them and host processors that does not hide these gains behind software overhead . The Hamlyn interface architecture does this . It uses sender-based memory management (queue user table) to eliminate receiver buffer overruns , provides applications with direct hardware access to minimize latency , supports adaptive routing networks to allow higher throughput , and offers full protection between applications so that it can be used in a general-purpose computing environment . To test these claims we built a prototype Hamlyn interface for a Myrinet network connected to a standard HP workstation and report here on its design and performance . Our interface delivers an application-to-application round trip time of 28 μs for short messages and a one-way time of 17 . 4 μs + 32 . 6 ns/byte (30 . 7 MB/s) for longer ones , while requiring fewer CPU cycles than an aggressive implementation of Active Messages on the CM-5 .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table (memory management) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
An Implementation Of The Hamlyn Sender-managed Interface Architecture . As the latency and bandwidth of multicomputer interconnection fabrics improve , there is a growing need for an interface between them and host processors that does not hide these gains behind software overhead . The Hamlyn interface architecture does this . It uses sender-based memory management (queue user table) to eliminate receiver buffer overruns , provides applications with direct hardware access to minimize latency , supports adaptive routing networks to allow higher throughput , and offers full protection between applications so that it can be used in a general-purpose computing environment . To test these claims we built a prototype Hamlyn interface for a Myrinet network connected to a standard HP workstation and report here on its design and performance . Our interface delivers an application-to-application round trip time of 28 μs for short messages and a one-way time of 17 . 4 μs + 32 . 6 ns/byte (30 . 7 MB/s) for longer ones , while requiring fewer CPU cycles than an aggressive implementation of Active Messages on the CM-5 .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20130014114A1

Filed: 2012-09-12     Issued: 2013-01-10

Information processing apparatus and method for carrying out multi-thread processing

(Original Assignee) Sony Interactive Entertainment Inc     (Current Assignee) Sony Interactive Entertainment Inc

Akihito Nagata
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (storage location, one processor) at a first server , wherein the producer worker sends a message to a datacenter queue (push module) at least partially stored at a second server ;

storing the message in a queue cache (write access) at the first server ;

detecting a consumer worker (storage location, one processor) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor placed identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) of the data in such a manner that the data consumption thread is acquired .
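The queue behavior recited in this claim, in which an empty-handed consumer thread leaves its identification in the data queue and a later producer delivers the data directly to that waiting thread, can be sketched as follows; the direct-delivery mailboxes dict and the use of string identifiers are simplifying assumptions for illustration, not the application's implementation.

from collections import deque

class HandoffQueue:
    # Illustrative data queue: an empty-handed consumer leaves its identification
    # in the queue, and a later producer delivers data directly to that waiting
    # consumer (the "mailboxes" dict is an assumed delivery mechanism).
    def __init__(self):
        self.queue = deque()   # holds data items or waiting consumer ids
        self.mailboxes = {}    # consumer id -> directly delivered data

    def consume(self, consumer_id):
        # In this sketch, consumer ids are strings and data items are not.
        if self.queue and not isinstance(self.queue[0], str):
            return self.queue.popleft()    # normal case: data was available
        self.queue.append(consumer_id)     # otherwise record that this consumer waits
        return None

    def produce(self, data):
        if self.queue and isinstance(self.queue[0], str):
            waiter = self.queue.popleft()
            self.mailboxes[waiter] = data  # change storage location: hand off directly
        else:
            self.queue.append(data)        # no waiter: place the data into the queue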

US20130014114A1
CLAIM 21
. An information processing apparatus according to claim 15 , wherein access to the object is read/write access (queue cache) from/to data stored in the shared memory , wherein the information concerning the current state of the object appended to the head pointer is the number of threads that read the data and the number of threads that write the data , and wherein the information concerning access requested by the subsequent thread appended to each pointer is distinguished between read and write .

US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, queue user table, datacenter queue information) operative to place the identity information of the thread into the queue when access is not granted .
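The referencing / determining / push structure of this claim, a linked-list pending queue whose head pointer carries the object's current state, is sketched below in Python; the reader/writer counters and all method names are illustrative assumptions rather than the application's own terms.

class PendingQueue:
    # Illustrative pending queue structured as a linked list of thread ids, with
    # the object's current state (reader/writer counts) carried with the head.
    class Node:
        def __init__(self, thread_id):
            self.thread_id = thread_id
            self.next = None

    def __init__(self):
        self.head = None
        self.state = {"readers": 0, "writers": 0}  # state associated with the head pointer

    def reference(self):
        # Referencing module: look at the queue and the state kept with the head.
        return self.head, dict(self.state)

    def access_granted(self, want_write):
        # Determining module: grant based on the current reader/writer counts.
        if want_write:
            return self.state["readers"] == 0 and self.state["writers"] == 0
        return self.state["writers"] == 0

    def push(self, thread_id):
        # Push module: place this thread's identification at the end of the list.
        node = PendingQueue.Node(thread_id)
        if self.head is None:
            self.head = node
            return
        current = self.head
        while current.next:
            current = current.next
        current.next = node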

US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel associated with the datacenter queue (push module) ;

and modifying the message in response to receiving the signal .
US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, queue user table, datacenter queue information) operative to place the identity information of the thread into the queue when access is not granted .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command from the datacenter queue (push module) , deleting the message from the datacenter queue .
US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, queue user table, datacenter queue information) operative to place the identity information of the thread into the queue when access is not granted .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker (storage location, one processor) associated with the message request and the datacenter queue (push module) associated with the message request .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor placed identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) of the data in such a manner that the data consumption thread is acquired .

US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, queue user table, datacenter queue information) operative to place the identity information of the thread into the queue when access is not granted .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker (storage location, one processor) prior to storing the message in the queue cache (write access) at the second server .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor placed identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) of the data in such a manner that the data consumption thread is acquired .

US20130014114A1
CLAIM 21
. An information processing apparatus according to claim 15 , wherein access to the object is read/write access (queue cache) from/to data stored in the shared memory , wherein the information concerning the current state of the object appended to the head pointer is the number of threads that read the data and the number of threads that write the data , and wherein the information concerning access requested by the subsequent thread appended to each pointer is distinguished between read and write .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker (storage location, one processor) on a first virtual machine ;

and executing the consumer worker (storage location, one processor) on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor placed identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) of the data in such a manner that the data consumption thread is acquired .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker (storage location, one processor) at a first server , wherein the producer worker sends a message to a datacenter queue (push module) at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache (write access) at the first server ;

detect a consumer worker (storage location, one processor) at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor placed identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) of the data in such a manner that the data consumption thread is acquired .

US20130014114A1
CLAIM 21
. An information processing apparatus according to claim 15 , wherein access to the object is read/write access (queue cache) from/to data stored in the shared memory , wherein the information concerning the current state of the object appended to the head pointer is the number of threads that read the data and the number of threads that write the data , and wherein the information concerning access requested by the subsequent thread appended to each pointer is distinguished between read and write .

US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, queue user table, datacenter queue information) operative to place the identity information of the thread into the queue when access is not granted .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue (push module) , delete the message from the first server .
US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, queue user table, datacenter queue information) operative to place the identity information of the thread into the queue when access is not granted .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker (storage location, one processor) executing a virtual machine ;

and identify one (configured to store) or more of the consumer worker associated with the message request and the datacenter queue (push module) associated with the message request .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store (identify one) a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor placed identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) of the data in such a manner that the data consumption thread is acquired .

US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, queue user table, datacenter queue information) operative to place the identity information of the thread into the queue when access is not granted .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker (storage location, one processor) associated with the message .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor placed identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) of the data in such a manner that the data consumption thread is acquired .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue (push module) associated with the message .
US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, queue user table, datacenter queue information) operative to place the identity information of the thread into the queue when access is not granted .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table (push module) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (storage location, one processor) information , consumer worker (storage location, one processor) information , datacenter queue (push module) information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor placed identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) of the data in such a manner that the data consumption thread is acquired .

US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, queue user table, datacenter queue information) operative to place the identity information of the thread into the queue when access is not granted .
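
The queue user table of '472 claims 12 and 13 is essentially an index from observed queue usage information to the workers that produce to or consume from each datacenter queue. A minimal sketch follows, assuming a VMM-side observer reports (worker, queue) events; the class name QueueUserTable and its methods are illustrative, not the patent's implementation.

from collections import defaultdict

class QueueUserTable:
    # Sketch of a table built from observed queue usage information:
    # queue name -> the producer and consumer workers seen using that queue.
    def __init__(self):
        self._table = defaultdict(lambda: {"producers": set(), "consumers": set()})

    def observe_send(self, producer_id, queue_name):
        # Record producer worker information and its associated datacenter queue.
        self._table[queue_name]["producers"].add(producer_id)

    def observe_request(self, consumer_id, queue_name):
        # Record consumer worker information and its associated datacenter queue.
        self._table[queue_name]["consumers"].add(consumer_id)

    def as_dict(self):
        return dict(self._table)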

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application is further configured to : update the queue user table (push module) based on the observed queue usage information .
US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, queue user table, datacenter queue information) operative to place the identity information of the thread into the queue when access is not granted .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker (storage location, one processor) and consumer worker (storage location, one processor) pairs through use of the queue user table (push module) through a process to : identify a message that includes matching the producer worker to another datacenter queue (push module) , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor placed identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) of the data in such a manner that the data consumption thread is acquired .

US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, queue user table, datacenter queue information) operative to place the identity information of the thread into the queue when access is not granted .
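
Claim 14 then walks such a table to determine producer and consumer workers that address the same datacenter queue. The sketch below assumes a table shaped like the QueueUserTable sketch above (queue name mapped to producer and consumer sets); the function name and tuple layout are illustrative.

def matched_pairs(queue_user_table):
    # Sketch: walk a queue user table shaped like
    # {"queue-a": {"producers": {"p1"}, "consumers": {"c1"}}} and yield every
    # producer/consumer pair that addresses the same datacenter queue.
    for queue_name, users in queue_user_table.items():
        for producer in users["producers"]:
            for consumer in users["consumers"]:
                yield producer, consumer, queue_name

# usage (hypothetical data):
# list(matched_pairs({"queue-a": {"producers": {"p1"}, "consumers": {"c1"}}}))
# -> [("p1", "c1", "queue-a")]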

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker (storage location, one processor) pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker (storage location, one processor) and the consumer worker .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor placed identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) of the data in such a manner that the data consumption thread is acquired .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker (storage location, one processor) ;

store the message in the queue cache (write access) ;

and provide the intercepted message to the consumer worker (storage location, one processor) in response to the message request .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor placed identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) of the data in such a manner that the data consumption thread is acquired .

US20130014114A1
CLAIM 21
. An information processing apparatus according to claim 15 , wherein access to the object is read/write access (queue cache) from/to data stored in the shared memory , wherein the information concerning the current state of the object appended to the head pointer is the number of threads that read the data and the number of threads that write the data , and wherein the information concerning access requested by the subsequent thread appended to each pointer is distinguished between read and write .
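
Claims 15 and 16 of the '472 patent describe an intercept module that, for matched queues, captures a producer's message into a local queue cache and answers a co-located consumer's request from that cache. The sketch below is one way such a path could look; the callables forward and fetch_remote stand in for the remote datacenter queue interface and are assumptions, as are the class and method names.

class InterceptModule:
    # Sketch of a local intercept path for matched queues: producer messages are
    # copied into a local queue cache, and a co-located consumer's request is
    # answered from that cache when possible.
    def __init__(self, matched_queues):
        self.matched_queues = set(matched_queues)
        self.queue_cache = {}                 # queue name -> list of cached messages

    def on_send(self, queue_name, message, forward):
        # Intercept a producer's message; 'forward' stands in for delivery to the
        # remote datacenter queue and is an assumed callable.
        if queue_name in self.matched_queues:
            self.queue_cache.setdefault(queue_name, []).append(message)
        forward(queue_name, message)

    def on_request(self, queue_name, fetch_remote):
        # Serve a consumer's message request locally when a cached copy exists;
        # 'fetch_remote' stands in for a normal remote queue read.
        cached = self.queue_cache.get(queue_name)
        if cached:
            return cached.pop(0)
        return fetch_remote(queue_name)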

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker (storage location, one processor) at a first server , wherein the producer worker sends a message to a datacenter queue (push module) at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache (write access) at the first server ;

detecting a consumer worker (storage location, one processor) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor placed identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) of the data in such a manner that the data consumption thread is acquired .

US20130014114A1
CLAIM 21
. An information processing apparatus according to claim 15 , wherein access to the object is read/write access (queue cache) from/to data stored in the shared memory , wherein the information concerning the current state of the object appended to the head pointer is the number of threads that read the data and the number of threads that write the data , and wherein the information concerning access requested by the subsequent thread appended to each pointer is distinguished between read and write .

US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, queue user table, datacenter queue information) operative to place the identity information of the thread into the queue when access is not granted .
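
Claims 2, 3, 8 and 17 of the '472 patent add a command channel associated with the datacenter queue, with the locally held message modified or deleted in response to the received signal. The sketch below assumes a signal carrying a command, queue name, message id and optional fields; that layout and the function name are illustrative only.

def handle_command(signal, queue_cache):
    # Sketch of reacting to a command-channel signal for a locally cached message.
    # The signal layout {"command", "queue", "message_id", "fields"} is assumed.
    messages = queue_cache.get(signal["queue"], [])
    for index, message in enumerate(messages):
        if message.get("id") != signal["message_id"]:
            continue
        if signal["command"] == "delete":
            del messages[index]                       # delete the cached message
        elif signal["command"] == "modify":
            message.update(signal.get("fields", {}))  # modify the message in place
        break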

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table (push module) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (storage location, one processor) information , consumer worker (storage location, one processor) information , datacenter queue (push module) information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor placed identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) of the data in such a manner that the data consumption thread is acquired .

US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, queue user table, datacenter queue information) operative to place the identity information of the thread into the queue when access is not granted .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker (storage location, one processor) and consumer worker (storage location, one processor) pairs through use of the queue user table (push module) through a process to : identify a message that includes matching the producer worker to another datacenter queue (push module) , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor placed identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) of the data in such a manner that the data consumption thread is acquired .

US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, queue user table, datacenter queue information) operative to place the identity information of the thread into the queue when access is not granted .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker (storage location, one processor) associated with the message request and the datacenter queue (push module) associated with the message request .
US20130014114A1
CLAIM 1
. An information processing apparatus comprising : a memory configured to store a data queue comprised of individual data ;
and at least one processor (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) configured to process either of a data generation thread for placing data generated into the data queue or a data consumption thread for retrieving the data from the data queue , wherein , when there is no data to be retrieved in the data queue during processing of the data consumption thread , the processor placed identification information of the data consumption thread into the data queue , and wherein , when the data is to be placed into the data queue during processing of the data generation thread and when the identification information of the data consumption thread has been placed into the data queue , the processor changes a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) of the data in such a manner that the data consumption thread is acquired .

US20130014114A1
CLAIM 23
. A non-transitory computer-readable medium in which a program embedded , the program comprising : a referencing module operative to reference a queue when access needs to be made to an object requiring synchronization management during processing a thread , the queue being a pending queue requesting access to the object being structured by a linked list such that identification information of each thread is connected by a pointer indicating identification information of a subsequent thread in the queue ;
a determining module operative to determine whether or not access is granted by acquiring information concerning a current state of the object appended to a head pointer , which is a pointer indicating the identification information of a first thread in the linked list ;
and a push module (datacenter queue, queue user table, datacenter queue information) operative to place the identity information of the thread into the queue when access is not granted .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20130044749A1

Filed: 2012-03-13     Issued: 2013-02-21

System and method for exchanging information among exchange applications

(Original Assignee) FireStar Software Inc     (Current Assignee) FireStar Software Inc

Mark Eisner, Gabriel Oancea
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker (including information) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .
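
The '749 reference executes simple transactions in the order given by a routing-slip template carried in the gateway message header, using configuration data stored at the gateway. A compact sketch of that control flow follows; the dictionary layout of the gateway message and the callable-per-transaction configuration are assumptions for illustration.

def run_gateway_message(gateway_message, simple_transactions):
    # Sketch: execute the simple transactions named in the routing-slip block of
    # the gateway message header, in the defined order, against the payload.
    payload = gateway_message["payload"]
    for step in gateway_message["header"]["routing_slip"]:
        handler = simple_transactions[step]   # configuration data defining the transaction
        payload = handler(payload)
    return payload

# usage (hypothetical data):
# run_gateway_message(
#     {"header": {"routing_slip": ["validate", "enrich"]}, "payload": {"x": 1}},
#     {"validate": lambda p: p, "enrich": lambda p: {**p, "y": 2}})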

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker (including information) associated with the message request and the datacenter queue associated with the message request .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker on a first virtual machine ;

and executing the consumer worker (including information) on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker (including information) at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker (including information) executing a virtual machine ;

and identify one (configured to store) or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US20130044749A1
CLAIM 26
. A gateway for performing message-based business processes among a plurality of applications , comprising : a data store configured to store (identify one) configuration data , the configuration data including information defining one or more simple transactions that can be performed by the gateway ;
an abstract queue configured to receive a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and at least one processing unit configured to execute at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker (including information) information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker (including information) pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker (including information) pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker (including information) in response to the message request .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker (including information) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker (including information) information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker (including information) pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker (including information) associated with the message request and the datacenter queue associated with the message request .
US20130044749A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20120233273A1

Filed: 2012-03-08     Issued: 2012-09-13

Systems and methods for message collection

(Original Assignee) James Robert Miner; Jason Paul Oettinger     

James Robert Miner, Jason Paul Oettinger
US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command (delete command) from the datacenter queue , deleting the message from the datacenter queue .
US20120233273A1
CLAIM 9
. The system of paragraph 1 , the communication constituting a delete message that results in bin content in the identified bin being deleted by the processor portion ;
the processor portion identifying the command as a delete command (delete command) ;
and the effecting the command on the identified bin is constituted by the processor portion deleting content from the identified bin .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one (second data) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20120233273A1
CLAIM 16
. A method for processing communication content from a user device of a user , the user device assigned a routing character string , the method performed by a system in the form of a tangibly embodied computer , the method comprising : inputting an electronic communication from the user device , the electronic communication including communication content and the routing character string , and the communication content constituted by data generated as a result of , and representative of , characters keyed in to the user device by the user ;
maintaining , by the processor portion , a bin collection for the user , the bin collection including a plurality of bins , the processor portion performing processing on the communication content , the processing including : identifying the user and the bin collection of the user based on the routing character string ;
processing the communication content to identify a command and a bin including : mapping first data in the communication content to a command ;
and mapping second data (identifying one) in the communication content to a bin label ;
and the command dictating particular action to be performed by the processor portion , and the bin label identifies an identified bin , in the bin collection of the user , upon which to perform the action ;
and performing the action on the identified bin , the identified bin being one of a plurality of bins a bin collection of the user .
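
Claim 16 of the '273 reference maps first data in the communication content to a command and second data to a bin label, then performs the command on the identified bin. The sketch below assumes a whitespace-delimited layout (command, bin label, remainder) and supports only illustrative add and delete commands; none of this is the reference's implementation.

def route_communication(content, bins):
    # Sketch: map first data in the communication content to a command and
    # second data to a bin label, then perform the command on the identified bin.
    command, _, rest = content.partition(" ")     # first data -> command
    bin_label, _, body = rest.partition(" ")      # second data -> bin label
    bucket = bins.setdefault(bin_label, [])
    if command == "add":
        bucket.append(body)
    elif command == "delete":
        bins[bin_label] = []                      # a delete command clears the bin content
    return bins

# usage (hypothetical data): route_communication("add todo call the plumber", {})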

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command (delete command) from the datacenter queue , delete the message from the first server .
US20120233273A1
CLAIM 9
. The system of paragraph 1 , the communication constituting a delete message that results in bin content in the identified bin being deleted by the processor portion ;
the processor portion identifying the command as a delete command (delete command) ;
and the effecting the command on the identified bin is constituted by the processor portion deleting content from the identified bin .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one (second data) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20120233273A1
CLAIM 16
. A method for processing communication content from a user device of a user , the user device assigned a routing character string , the method performed by a system in the form of a tangibly embodied computer , the method comprising : inputting an electronic communication from the user device , the electronic communication including communication content and the routing character string , and the communication content constituted by data generated as a result of , and representative of , characters keyed in to the user device by the user ;
maintaining , by the processor portion , a bin collection for the user , the bin collection including a plurality of bins , the processor portion performing processing on the communication content , the processing including : identifying the user and the bin collection of the user based on the routing character string ;
processing the communication content to identify a command and a bin including : mapping first data in the communication content to a command ;
and mapping second data (identifying one) in the communication content to a bin label ;
and the command dictating particular action to be performed by the processor portion , and the bin label identifies an identified bin , in the bin collection of the user , upon which to perform the action ;
and performing the action on the identified bin , the identified bin being one of a plurality of bins a bin collection of the user .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102713852A

Filed: 2012-02-01     Issued: 2012-10-03

A multi-core processor system (一种多核处理器系统)

(Original Assignee) Huawei Technologies Co Ltd     (Current Assignee) Huawei Technologies Co Ltd

张卫国, 邬力波
US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information (连接一) associated with the producer worker , and datacenter queue information associated with the consumer worker .
CN102713852A
CLAIM 1
. A multi-core processor system , characterized by comprising : a plurality of central processor units and a plurality of groups of first-level hardware message queues ; wherein each central processor unit is respectively connected to one (连接一) (datacenter queue information) group of first-level hardware message queues and is configured to process the messages in the first-level hardware message queues ; wherein each group of first-level hardware message queues comprises a plurality of first-level hardware message queues ; and , within each group of first-level hardware message queues , a first-level hardware message queue with a higher priority is scheduled first , and first-level hardware message queues of the same priority are scheduled in rotation according to round-robin scheduling weights .
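
The scheduling rule in CN102713852A claim 1 (a higher-priority first-level hardware message queue is scheduled first; equal-priority queues are rotated according to round-robin weights) can be approximated with a short credit-based sketch. The queue record fields (priority, weight, messages, credit) are assumptions, and the credit scheme is only one simple way to realize weighted rotation; real hardware queues would of course not be Python dictionaries.

def schedule(queues):
    # Sketch of the claimed rule: among non-empty queues, the highest priority
    # wins; equal-priority queues rotate by weight (a simple credit scheme here).
    ready = [q for q in queues if q["messages"]]
    if not ready:
        return None
    top = max(q["priority"] for q in ready)
    candidates = [q for q in ready if q["priority"] == top]
    for q in candidates:
        q["credit"] = q.get("credit", 0) + q["weight"]   # accumulate weighted credit
    chosen = max(candidates, key=lambda q: q["credit"])  # most credit is served next
    chosen["credit"] = 0                                 # and starts over
    return chosen["messages"].pop(0)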

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information (连接一) associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
CN102713852A
CLAIM 1
. A multi-core processor system , characterized by comprising : a plurality of central processor units and a plurality of groups of first-level hardware message queues ; wherein each central processor unit is respectively connected to one (连接一) (datacenter queue information) group of first-level hardware message queues and is configured to process the messages in the first-level hardware message queues ; wherein each group of first-level hardware message queues comprises a plurality of first-level hardware message queues ; and , within each group of first-level hardware message queues , a first-level hardware message queue with a higher priority is scheduled first , and first-level hardware message queues of the same priority are scheduled in rotation according to round-robin scheduling weights .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102668516A

Filed: 2011-12-02     Issued: 2012-09-12

Method and apparatus for implementing message delivery in a cloud message service (一种云消息服务中实现消息传递的方法和装置)

(Original Assignee) Huawei Technologies Co Ltd     (Current Assignee) Huawei Technologies Co Ltd

邓金波, 樊荣, 赵军
US9479472B2
CLAIM 1
. A method to locally process queue requests (包含的) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
CN102668516A
CLAIM 22
. The cloud message service device according to claim 21 , wherein the message queue management unit determines , according to an order-preserving value of the message queue or a parameter contained (包含的) (queue requests) in the request of the second distributed program to read message data , whether the message delivery of the second distributed program requires an ordering guarantee .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests (包含的) from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
CN102668516A
CLAIM 22
. The cloud message service device according to claim 21 , wherein the message queue management unit determines , according to an order-preserving value of the message queue or a parameter contained (包含的) (queue requests) in the request of the second distributed program to read message data , whether the message delivery of the second distributed program requires an ordering guarantee .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (消息传递) between the producer worker and the consumer worker .
CN102668516A
CLAIM 1
. A method for implementing message delivery (消息传递) (second message) in a cloud message service , characterized by comprising : receiving a message sent by a first distributed program , storing the message data carried by the message in a distributed Key-Value storage system , and incrementing a send-message sequence number of the message queue corresponding to the message ; receiving a request from a second distributed program to read message data , reading the message data in the distributed Key-Value storage system , sending the read message data to the second distributed program , and , when the message data is the message data corresponding to a receive-message sequence number of the message queue , incrementing the receive-message sequence number of the message queue .
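
CN102668516A claim 1 stores message data in a distributed Key-Value system and tracks per-queue send and receive message sequence numbers. The sketch below uses a plain dictionary in place of the distributed store, and the key names and class name are illustrative assumptions rather than the reference's implementation.

class CloudMessageQueue:
    # Sketch of per-queue send/receive message sequence numbers kept in a
    # key-value store; a plain dict stands in for the distributed Key-Value system.
    def __init__(self, kv, name):
        self.kv, self.name = kv, name
        self.kv.setdefault(name + ":send_seq", 0)
        self.kv.setdefault(name + ":recv_seq", 0)

    def send(self, data):
        seq = self.kv[self.name + ":send_seq"]
        self.kv[self.name + ":" + str(seq)] = data    # store the message data
        self.kv[self.name + ":send_seq"] = seq + 1    # increment the send sequence number

    def receive(self):
        seq = self.kv[self.name + ":recv_seq"]
        data = self.kv.get(self.name + ":" + str(seq))
        if data is not None:
            self.kv[self.name + ":recv_seq"] = seq + 1  # increment the receive sequence number
        return data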

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests (包含的) from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
CN102668516A
CLAIM 22
. The cloud message service device according to claim 21 , wherein the message queue management unit determines , according to an order-preserving value of the message queue or a parameter contained (包含的) (queue requests) in the request of the second distributed program to read message data , whether the message delivery of the second distributed program requires an ordering guarantee .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20120066177A1

Filed: 2011-11-15     Issued: 2012-03-15

Systems and Methods for Remote Deletion of Contact Information

(Original Assignee) AT&T Mobility II LLC     (Current Assignee) AT&T Mobility II LLC

Scott Swanburg, Andre Okada, Paul Hanson, Chris Young
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (desktop computer) at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (message request) to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .

US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (datacenter queue, datacenter queue information) associated with the second user ;
a laptop computer associated with the second user ;
and a tablet computer associated with the second user .
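
The '177 reference deletes the first user's contact information from the network contact database and then relies on a synchronization pass to remove it from the second device's local contact database. A minimal sketch of that two-step flow follows; the data shapes and the function name are assumptions for illustration only.

def remote_delete(network_db, local_dbs, first_user, second_device):
    # Sketch: delete the first user's contact from the network contact database,
    # then synchronize the second device's local contact database against it so
    # the entry disappears there as well.
    network_db.pop(first_user, None)
    local_dbs[second_device] = {
        name: info
        for name, info in local_dbs[second_device].items()
        if name in network_db
    }
    return local_dbs[second_device]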

US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel associated with the datacenter queue (desktop computer) ;

and modifying the message in response to receiving the signal .
US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (datacenter queue, datacenter queue information) associated with the second user ;
a laptop computer associated with the second user ;
and a tablet computer associated with the second user .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command from the datacenter queue (desktop computer) , deleting the message from the datacenter queue .
US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (datacenter queue, datacenter queue information) associated with the second user ;
a laptop computer associated with the second user ;
and a tablet computer associated with the second user .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request (message request) and the datacenter queue (desktop computer) associated with the message request .
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .

US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (datacenter queue, datacenter queue information) associated with the second user ;
a laptop computer associated with the second user ;
and a tablet computer associated with the second user .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (desktop computer) at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
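The following is a hedged Python sketch of how a VMM-level component might dispatch between intercepted worker traffic and command-channel signals, covering the functional steps of claim 7 above (with the delete handling of claims 3 and 8). The event shapes and names are invented for illustration.

    def deliver(consumer, message):
        print(f"deliver to {consumer}: {message}")

    def vmm_dispatch(event, cache):
        kind = event.get("kind")
        if kind == "producer_send":
            # Intercept the producer worker's message and store it in the queue cache.
            cache.setdefault(event["queue"], []).append(event["message"])
        elif kind == "consumer_detected":
            # Provide a locally cached message to the co-located consumer worker.
            pending = cache.get(event["queue"], [])
            if pending:
                deliver(event["consumer"], pending.pop(0))
        elif kind == "command_signal" and event.get("command") == "delete":
            # Signal on the command channel associated with the datacenter queue;
            # here a delete command drops the cached copy (cf. claims 3 and 8).
            cache.pop(event["queue"], None)

    cache = {}
    vmm_dispatch({"kind": "producer_send", "queue": "jobs", "message": "m1"}, cache)
    vmm_dispatch({"kind": "consumer_detected", "queue": "jobs", "consumer": "worker-B"}, cache)
    # prints: deliver to worker-B: m1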
US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (datacenter queue, datacenter queue information) associated with the second user ;
a laptop computer associated with the second user ;
and a tablet computer associated with the second user .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue (desktop computer) , delete the message from the first server .
US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (datacenter queue, datacenter queue information) associated with the second user ;
a laptop computer associated with the second user ;
and a tablet computer associated with the second user .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request (message request) sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue (desktop computer) associated with the message request .
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .

US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (datacenter queue, datacenter queue information) associated with the second user ;
a laptop computer associated with the second user ;
and a tablet computer associated with the second user .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue (desktop computer) associated with the message .
US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (datacenter queue, datacenter queue information) associated with the second user ;
a laptop computer associated with the second user ;
and a tablet computer associated with the second user .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (desktop computer) information associated with the producer worker , and datacenter queue information associated with the consumer worker .
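As a reading aid for claim 12 above, a minimal Python sketch of a queue user table built from observed queue usage information. The record layout is an assumption; the claim lists the kinds of information but not a schema.

    queue_user_table = {}   # queue name -> {"producers": set, "consumers": set}

    def observe(queue, worker, role):
        # Record one observation of a worker using a datacenter queue.
        entry = queue_user_table.setdefault(queue, {"producers": set(), "consumers": set()})
        entry["producers" if role == "producer" else "consumers"].add(worker)

    observe("jobs", "worker-A", "producer")
    observe("jobs", "worker-B", "consumer")
    print(queue_user_table["jobs"])   # {'producers': {'worker-A'}, 'consumers': {'worker-B'}}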
US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (datacenter queue, datacenter queue information) associated with the second user ;
a laptop computer associated with the second user ;
and a tablet computer associated with the second user .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (desktop computer) , and identify a message request (message request) that includes matching the consumer worker to the other datacenter queue .
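Continuing the sketch above, claim 14's producer/consumer pair matching could, under the same assumptions, look like the following. The matching rule (same datacenter queue observed for both roles) is an illustrative reading, not the patentee's implementation.

    def matched_pairs(queue_user_table):
        # Yield (queue, producer, consumer) triples observed against the same queue.
        for queue, entry in queue_user_table.items():
            for producer in entry["producers"]:
                for consumer in entry["consumers"]:
                    yield queue, producer, consumer

    table = {"jobs": {"producers": {"worker-A"}, "consumers": {"worker-B"}}}
    print(list(matched_pairs(table)))   # [('jobs', 'worker-A', 'worker-B')]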
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .

US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (datacenter queue, datacenter queue information) associated with the second user ;
a laptop computer associated with the second user ;
and a tablet computer associated with the second user .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request (message request) .
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (desktop computer) at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (message request) to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .

US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (datacenter queue, datacenter queue information) associated with the second user ;
a laptop computer associated with the second user ;
and a tablet computer associated with the second user .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (desktop computer) information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (datacenter queue, datacenter queue information) associated with the second user ;
a laptop computer associated with the second user ;
and a tablet computer associated with the second user .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (desktop computer) , and identify a message request (message request) that includes matching the consumer worker to the other datacenter queue .
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .

US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (datacenter queue, datacenter queue information) associated with the second user ;
a laptop computer associated with the second user ;
and a tablet computer associated with the second user .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request (message request) and the datacenter queue (desktop computer) associated with the message request .
US20120066177A1
CLAIM 1
. A method , for providing a remote deletion function using synchronization , comprising : receiving a delete request message , at a network contact database , from a first device associated with a first user , the delete request message requesting (message request) deletion of contact information associated with the first user from a second device , the second device being associated with a second user and comprising a local contact database of the second device ;
deleting contact information corresponding to the first user from the network contact database ;
and initiating a synchronization process between the local contact database of the second device and the network contact database thereby deleting , from the local contact database of the second device , the contact information associated with the first user .

US20120066177A1
CLAIM 7
. The method of claim 1 , wherein the second device is a computer selected from a group of computers consisting of : a desktop computer (datacenter queue, datacenter queue information) associated with the second user ;
a laptop computer associated with the second user ;
and a tablet computer associated with the second user .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20130036427A1

Filed: 2011-08-03     Issued: 2013-02-07

Message queuing with flexible consistency options

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

Han Chen, Minkyong Kim, Hui Lei, Fan Ye
US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one (identifying one) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20130036427A1
CLAIM 7
. The method of claim 1 , wherein selecting the message comprises : analyzing a timestamp associated with the message ;
determining if the timestamp identifies a future point in time ;
responsive to the timestamp identifying a future point in time determining that the message is unavailable ;
and responsive to the timestamp identifying one (identifying one) of a current point in time and a past point in time , determining that the message is available .
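A small Python sketch of the timestamp-based availability test recited in claim 7 of US20130036427A1 above; using time.time() as the "current point in time" is an assumption for illustration.

    import time

    def message_available(timestamp, now=None):
        # Future timestamp -> message unavailable; current or past -> available.
        now = time.time() if now is None else now
        return timestamp <= now

    print(message_available(time.time() - 5))      # True  (past point in time)
    print(message_available(time.time() + 3600))   # False (future point in time)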

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one (identifying one) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20130036427A1
CLAIM 7
. The method of claim 1 , wherein selecting the message comprises : analyzing a timestamp associated with the message ;
determining if the timestamp identifies a future point in time ;
responsive to the timestamp identifying a future point in time determining that the message is unavailable ;
and responsive to the timestamp identifying one (identifying one) of a current point in time and a past point in time , determining that the message is available .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20130007183A1

Filed: 2011-06-30     Issued: 2013-01-03

Methods And Apparatus For Remotely Updating Executing Processes

(Original Assignee) Amazon Technologies Inc     (Current Assignee) Amazon Technologies Inc

James Christopher Sorenson, III, Yun Lin, Ivan Brugiolo
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20130007183A1
CLAIM 9
. A device , comprising : at least one processor (producer worker) ;
and a memory comprising program instructions , wherein the program instructions are executable by the at least one processor to implement a storage gateway process that receives I/O requests including write requests from one or more client processes on a local network via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
wherein the program instructions are further executable by the at least one processor to download an update package from a remote network , direct the storage gateway process to shut down , and instantiate an updated storage gateway process in the memory according to the downloaded update package ;
wherein , in response to receiving the direction to shut down , the storage gateway process persists a storage gateway configuration that includes an indication of the one or more I/O ports , flushes write data from the in-memory portion of the write log to the local data store , and releases the one or more I/O ports ;
and wherein the updated storage gateway process loads the persisted storage gateway configuration and , subsequent to the storage gateway process releasing the one or more I/O ports , takes over storage gateway operations from the storage gateway process .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker (one processor) prior to storing the message in the queue cache at the second server .
US20130007183A1
CLAIM 9
. A device , comprising : at least one processor (producer worker) ;
and a memory comprising program instructions , wherein the program instructions are executable by the at least one processor to implement a storage gateway process that receives I/O requests including write requests from one or more client processes on a local network via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
wherein the program instructions are further executable by the at least one processor to download an update package from a remote network , direct the storage gateway process to shut down , and instantiate an updated storage gateway process in the memory according to the downloaded update package ;
wherein , in response to receiving the direction to shut down , the storage gateway process persists a storage gateway configuration that includes an indication of the one or more I/O ports , flushes write data from the in-memory portion of the write log to the local data store , and releases the one or more I/O ports ;
and wherein the updated storage gateway process loads the persisted storage gateway configuration and , subsequent to the storage gateway process releasing the one or more I/O ports , takes over storage gateway operations from the storage gateway process .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker (one processor) on a first virtual machine ;

and executing the consumer worker on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20130007183A1
CLAIM 9
. A device , comprising : at least one processor (producer worker) ;
and a memory comprising program instructions , wherein the program instructions are executable by the at least one processor to implement a storage gateway process that receives I/O requests including write requests from one or more client processes on a local network via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
wherein the program instructions are further executable by the at least one processor to download an update package from a remote network , direct the storage gateway process to shut down , and instantiate an updated storage gateway process in the memory according to the downloaded update package ;
wherein , in response to receiving the direction to shut down , the storage gateway process persists a storage gateway configuration that includes an indication of the one or more I/O ports , flushes write data from the in-memory portion of the write log to the local data store , and releases the one or more I/O ports ;
and wherein the updated storage gateway process loads the persisted storage gateway configuration and , subsequent to the storage gateway process releasing the one or more I/O ports , takes over storage gateway operations from the storage gateway process .

US9479472B2
CLAIM 7
. A computing device (computing device) to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20130007183A1
CLAIM 1
. A method , comprising : detecting , by an update agent executing on a computing device (computing device) on a local network , that an update package for a current process executing on the computing device is available on a remote network , wherein the current process receives I/O requests including write requests from one or more client processes via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
downloading the update package to the computing device in response to said detecting ;
directing the current process to start an update sequence ;
storing , by the current process , a current configuration to an external store , wherein the current configuration includes at least an indication of the one or more I/O ports ;
instantiating an updated process on the computing device according to the downloaded update package ;
loading , by the updated process , the current configuration from the external store ;
flushing , by the current process , write data from the in-memory portion of the write log to the local data store ;
releasing , by the current process , the one or more I/O ports ;
and receiving , by the updated process , I/O requests including write requests from the one or more client processes via the one or more I/O ports .
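A simplified Python sketch of the update hand-off ordering recited in claim 1 of US20130007183A1 above. The process objects and stores are stand-ins, since the claim describes an ordering of steps rather than an API.

    class GatewayProcess:
        def __init__(self, ports, write_log=None):
            self.ports = list(ports)
            self.write_log = list(write_log or [])    # in-memory portion of the write log
        def persist_config(self, store):
            store["config"] = {"ports": self.ports}   # configuration includes the I/O ports
        def flush_and_release(self, local_store):
            local_store.extend(self.write_log)        # flush write data to the local data store
            self.write_log.clear()
            self.ports = []                           # release the I/O ports

    def update_sequence(current, store, local_store):
        current.persist_config(store)                     # current process stores its configuration
        updated = GatewayProcess(ports=[])                # instantiate updated process from the package
        updated.ports = store["config"]["ports"]          # updated process loads the persisted configuration
        current.flush_and_release(local_store)            # current process flushes, then releases ports
        return updated                                    # updated process now receives the I/O requests

    store, local_store = {}, []
    new_proc = update_sequence(GatewayProcess([10809], ["pending-write-1"]), store, local_store)
    print(new_proc.ports, local_store)   # [10809] ['pending-write-1']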

US20130007183A1
CLAIM 9
. A device , comprising : at least one processor (producer worker) ;
and a memory comprising program instructions , wherein the program instructions are executable by the at least one processor to implement a storage gateway process that receives I/O requests including write requests from one or more client processes on a local network via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
wherein the program instructions are further executable by the at least one processor to download an update package from a remote network , direct the storage gateway process to shut down , and instantiate an updated storage gateway process in the memory according to the downloaded update package ;
wherein , in response to receiving the direction to shut down , the storage gateway process persists a storage gateway configuration that includes an indication of the one or more I/O ports , flushes write data from the in-memory portion of the write log to the local data store , and releases the one or more I/O ports ;
and wherein the updated storage gateway process loads the persisted storage gateway configuration and , subsequent to the storage gateway process releasing the one or more I/O ports , takes over storage gateway operations from the storage gateway process .

US9479472B2
CLAIM 8
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server .
US20130007183A1
CLAIM 1
. A method , comprising : detecting , by an update agent executing on a computing device (computing device) on a local network , that an update package for a current process executing on the computing device is available on a remote network , wherein the current process receives I/O requests including write requests from one or more client processes via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
downloading the update package to the computing device in response to said detecting ;
directing the current process to start an update sequence ;
storing , by the current process , a current configuration to an external store , wherein the current configuration includes at least an indication of the one or more I/O ports ;
instantiating an updated process on the computing device according to the downloaded update package ;
loading , by the updated process , the current configuration from the external store ;
flushing , by the current process , write data from the in-memory portion of the write log to the local data store ;
releasing , by the current process , the one or more I/O ports ;
and receiving , by the updated process , I/O requests including write requests from the one or more client processes via the one or more I/O ports .

US9479472B2
CLAIM 9
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20130007183A1
CLAIM 1
. A method , comprising : detecting , by an update agent executing on a computing device (computing device) on a local network , that an update package for a current process executing on the computing device is available on a remote network , wherein the current process receives I/O requests including write requests from one or more client processes via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
downloading the update package to the computing device in response to said detecting ;
directing the current process to start an update sequence ;
storing , by the current process , a current configuration to an external store , wherein the current configuration includes at least an indication of the one or more I/O ports ;
instantiating an updated process on the computing device according to the downloaded update package ;
loading , by the updated process , the current configuration from the external store ;
flushing , by the current process , write data from the in-memory portion of the write log to the local data store ;
releasing , by the current process , the one or more I/O ports ;
and receiving , by the updated process , I/O requests including write requests from the one or more client processes via the one or more I/O ports .

US9479472B2
CLAIM 10
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker (one processor) associated with the message .
US20130007183A1
CLAIM 1
. A method , comprising : detecting , by an update agent executing on a computing device (computing device) on a local network , that an update package for a current process executing on the computing device is available on a remote network , wherein the current process receives I/O requests including write requests from one or more client processes via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
downloading the update package to the computing device in response to said detecting ;
directing the current process to start an update sequence ;
storing , by the current process , a current configuration to an external store , wherein the current configuration includes at least an indication of the one or more I/O ports ;
instantiating an updated process on the computing device according to the downloaded update package ;
loading , by the updated process , the current configuration from the external store ;
flushing , by the current process , write data from the in-memory portion of the write log to the local data store ;
releasing , by the current process , the one or more I/O ports ;
and receiving , by the updated process , I/O requests including write requests from the one or more client processes via the one or more I/O ports .

US20130007183A1
CLAIM 9
. A device , comprising : at least one processor (producer worker) ;
and a memory comprising program instructions , wherein the program instructions are executable by the at least one processor to implement a storage gateway process that receives I/O requests including write requests from one or more client processes on a local network via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
wherein the program instructions are further executable by the at least one processor to download an update package from a remote network , direct the storage gateway process to shut down , and instantiate an updated storage gateway process in the memory according to the downloaded update package ;
wherein , in response to receiving the direction to shut down , the storage gateway process persists a storage gateway configuration that includes an indication of the one or more I/O ports , flushes write data from the in-memory portion of the write log to the local data store , and releases the one or more I/O ports ;
and wherein the updated storage gateway process loads the persisted storage gateway configuration and , subsequent to the storage gateway process releasing the one or more I/O ports , takes over storage gateway operations from the storage gateway process .

US9479472B2
CLAIM 11
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue associated with the message .
US20130007183A1
CLAIM 1
. A method , comprising : detecting , by an update agent executing on a computing device (computing device) on a local network , that an update package for a current process executing on the computing device is available on a remote network , wherein the current process receives I/O requests including write requests from one or more client processes via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
downloading the update package to the computing device in response to said detecting ;
directing the current process to start an update sequence ;
storing , by the current process , a current configuration to an external store , wherein the current configuration includes at least an indication of the one or more I/O ports ;
instantiating an updated process on the computing device according to the downloaded update package ;
loading , by the updated process , the current configuration from the external store ;
flushing , by the current process , write data from the in-memory portion of the write log to the local data store ;
releasing , by the current process , the one or more I/O ports ;
and receiving , by the updated process , I/O requests including write requests from the one or more client processes via the one or more I/O ports .

US9479472B2
CLAIM 12
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (one processor) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20130007183A1
CLAIM 1
. A method , comprising : detecting , by an update agent executing on a computing device (computing device) on a local network , that an update package for a current process executing on the computing device is available on a remote network , wherein the current process receives I/O requests including write requests from one or more client processes via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
downloading the update package to the computing device in response to said detecting ;
directing the current process to start an update sequence ;
storing , by the current process , a current configuration to an external store , wherein the current configuration includes at least an indication of the one or more I/O ports ;
instantiating an updated process on the computing device according to the downloaded update package ;
loading , by the updated process , the current configuration from the external store ;
flushing , by the current process , write data from the in-memory portion of the write log to the local data store ;
releasing , by the current process , the one or more I/O ports ;
and receiving , by the updated process , I/O requests including write requests from the one or more client processes via the one or more I/O ports .

US20130007183A1
CLAIM 9
. A device , comprising : at least one processor (producer worker) ;
and a memory comprising program instructions , wherein the program instructions are executable by the at least one processor to implement a storage gateway process that receives I/O requests including write requests from one or more client processes on a local network via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
wherein the program instructions are further executable by the at least one processor to download an update package from a remote network , direct the storage gateway process to shut down , and instantiate an updated storage gateway process in the memory according to the downloaded update package ;
wherein , in response to receiving the direction to shut down , the storage gateway process persists a storage gateway configuration that includes an indication of the one or more I/O ports , flushes write data from the in-memory portion of the write log to the local data store , and releases the one or more I/O ports ;
and wherein the updated storage gateway process loads the persisted storage gateway configuration and , subsequent to the storage gateway process releasing the one or more I/O ports , takes over storage gateway operations from the storage gateway process .

US9479472B2
CLAIM 13
. The computing device (computing device) of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information .
US20130007183A1
CLAIM 1
. A method , comprising : detecting , by an update agent executing on a computing device (computing device) on a local network , that an update package for a current process executing on the computing device is available on a remote network , wherein the current process receives I/O requests including write requests from one or more client processes via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
downloading the update package to the computing device in response to said detecting ;
directing the current process to start an update sequence ;
storing , by the current process , a current configuration to an external store , wherein the current configuration includes at least an indication of the one or more I/O ports ;
instantiating an updated process on the computing device according to the downloaded update package ;
loading , by the updated process , the current configuration from the external store ;
flushing , by the current process , write data from the in-memory portion of the write log to the local data store ;
releasing , by the current process , the one or more I/O ports ;
and receiving , by the updated process , I/O requests including write requests from the one or more client processes via the one or more I/O ports .

US9479472B2
CLAIM 14
. The computing device (computing device) of claim 12 , wherein the VMM application is further configured to : determine matching producer worker (one processor) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20130007183A1
CLAIM 1
. A method , comprising : detecting , by an update agent executing on a computing device (computing device) on a local network , that an update package for a current process executing on the computing device is available on a remote network , wherein the current process receives I/O requests including write requests from one or more client processes via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
downloading the update package to the computing device in response to said detecting ;
directing the current process to start an update sequence ;
storing , by the current process , a current configuration to an external store , wherein the current configuration includes at least an indication of the one or more I/O ports ;
instantiating an updated process on the computing device according to the downloaded update package ;
loading , by the updated process , the current configuration from the external store ;
flushing , by the current process , write data from the in-memory portion of the write log to the local data store ;
releasing , by the current process , the one or more I/O ports ;
and receiving , by the updated process , I/O requests including write requests from the one or more client processes via the one or more I/O ports .

US20130007183A1
CLAIM 9
. A device , comprising : at least one processor (producer worker) ;
and a memory comprising program instructions , wherein the program instructions are executable by the at least one processor to implement a storage gateway process that receives I/O requests including write requests from one or more client processes on a local network via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
wherein the program instructions are further executable by the at least one processor to download an update package from a remote network , direct the storage gateway process to shut down , and instantiate an updated storage gateway process in the memory according to the downloaded update package ;
wherein , in response to receiving the direction to shut down , the storage gateway process persists a storage gateway configuration that includes an indication of the one or more I/O ports , flushes write data from the in-memory portion of the write log to the local data store , and releases the one or more I/O ports ;
and wherein the updated storage gateway process loads the persisted storage gateway configuration and , subsequent to the storage gateway process releasing the one or more I/O ports , takes over storage gateway operations from the storage gateway process .

US9479472B2
CLAIM 15
. The computing device (computing device) of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker (one processor) and the consumer worker .
US20130007183A1
CLAIM 1
. A method , comprising : detecting , by an update agent executing on a computing device (computing device) on a local network , that an update package for a current process executing on the computing device is available on a remote network , wherein the current process receives I/O requests including write requests from one or more client processes via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
downloading the update package to the computing device in response to said detecting ;
directing the current process to start an update sequence ;
storing , by the current process , a current configuration to an external store , wherein the current configuration includes at least an indication of the one or more I/O ports ;
instantiating an updated process on the computing device according to the downloaded update package ;
loading , by the updated process , the current configuration from the external store ;
flushing , by the current process , write data from the in-memory portion of the write log to the local data store ;
releasing , by the current process , the one or more I/O ports ;
and receiving , by the updated process , I/O requests including write requests from the one or more client processes via the one or more I/O ports .

US20130007183A1
CLAIM 9
. A device , comprising : at least one processor (producer worker) ;
and a memory comprising program instructions , wherein the program instructions are executable by the at least one processor to implement a storage gateway process that receives I/O requests including write requests from one or more client processes on a local network via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
wherein the program instructions are further executable by the at least one processor to download an update package from a remote network , direct the storage gateway process to shut down , and instantiate an updated storage gateway process in the memory according to the downloaded update package ;
wherein , in response to receiving the direction to shut down , the storage gateway process persists a storage gateway configuration that includes an indication of the one or more I/O ports , flushes write data from the in-memory portion of the write log to the local data store , and releases the one or more I/O ports ;
and wherein the updated storage gateway process loads the persisted storage gateway configuration and , subsequent to the storage gateway process releasing the one or more I/O ports , takes over storage gateway operations from the storage gateway process .

US9479472B2
CLAIM 16
. The computing device (computing device) of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker (one processor) ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
US20130007183A1
CLAIM 1
. A method , comprising : detecting , by an update agent executing on a computing device (computing device) on a local network , that an update package for a current process executing on the computing device is available on a remote network , wherein the current process receives I/O requests including write requests from one or more client processes via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
downloading the update package to the computing device in response to said detecting ;
directing the current process to start an update sequence ;
storing , by the current process , a current configuration to an external store , wherein the current configuration includes at least an indication of the one or more I/O ports ;
instantiating an updated process on the computing device according to the downloaded update package ;
loading , by the updated process , the current configuration from the external store ;
flushing , by the current process , write data from the in-memory portion of the write log to the local data store ;
releasing , by the current process , the one or more I/O ports ;
and receiving , by the updated process , I/O requests including write requests from the one or more client processes via the one or more I/O ports .

US20130007183A1
CLAIM 9
. A device , comprising : at least one processor (producer worker) ;
and a memory comprising program instructions , wherein the program instructions are executable by the at least one processor to implement a storage gateway process that receives I/O requests including write requests from one or more client processes on a local network via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
wherein the program instructions are further executable by the at least one processor to download an update package from a remote network , direct the storage gateway process to shut down , and instantiate an updated storage gateway process in the memory according to the downloaded update package ;
wherein , in response to receiving the direction to shut down , the storage gateway process persists a storage gateway configuration that includes an indication of the one or more I/O ports , flushes write data from the in-memory portion of the write log to the local data store , and releases the one or more I/O ports ;
and wherein the updated storage gateway process loads the persisted storage gateway configuration and , subsequent to the storage gateway process releasing the one or more I/O ports , takes over storage gateway operations from the storage gateway process .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20130007183A1
CLAIM 9
. A device , comprising : at least one processor (producer worker) ;
and a memory comprising program instructions , wherein the program instructions are executable by the at least one processor to implement a storage gateway process that receives I/O requests including write requests from one or more client processes on a local network via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
wherein the program instructions are further executable by the at least one processor to download an update package from a remote network , direct the storage gateway process to shut down , and instantiate an updated storage gateway process in the memory according to the downloaded update package ;
wherein , in response to receiving the direction to shut down , the storage gateway process persists a storage gateway configuration that includes an indication of the one or more I/O ports , flushes write data from the in-memory portion of the write log to the local data store , and releases the one or more I/O ports ;
and wherein the updated storage gateway process loads the persisted storage gateway configuration and , subsequent to the storage gateway process releasing the one or more I/O ports , takes over storage gateway operations from the storage gateway process .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (one processor) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20130007183A1
CLAIM 9
. A device , comprising : at least one processor (producer worker) ;
and a memory comprising program instructions , wherein the program instructions are executable by the at least one processor to implement a storage gateway process that receives I/O requests including write requests from one or more client processes on a local network via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
wherein the program instructions are further executable by the at least one processor to download an update package from a remote network , direct the storage gateway process to shut down , and instantiate an updated storage gateway process in the memory according to the downloaded update package ;
wherein , in response to receiving the direction to shut down , the storage gateway process persists a storage gateway configuration that includes an indication of the one or more I/O ports , flushes write data from the in-memory portion of the write log to the local data store , and releases the one or more I/O ports ;
and wherein the updated storage gateway process loads the persisted storage gateway configuration and , subsequent to the storage gateway process releasing the one or more I/O ports , takes over storage gateway operations from the storage gateway process .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker (one processor) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20130007183A1
CLAIM 9
. A device , comprising : at least one processor (producer worker) ;
and a memory comprising program instructions , wherein the program instructions are executable by the at least one processor to implement a storage gateway process that receives I/O requests including write requests from one or more client processes on a local network via one or more I/O ports , appends write data indicated by the write requests to an in-memory portion of a write log on a local data store , and uploads write data from the write log to a remote data store ;
wherein the program instructions are further executable by the at least one processor to download an update package from a remote network , direct the storage gateway process to shut down , and instantiate an updated storage gateway process in the memory according to the downloaded update package ;
wherein , in response to receiving the direction to shut down , the storage gateway process persists a storage gateway configuration that includes an indication of the one or more I/O ports , flushes write data from the in-memory portion of the write log to the local data store , and releases the one or more I/O ports ;
and wherein the updated storage gateway process loads the persisted storage gateway configuration and , subsequent to the storage gateway process releasing the one or more I/O ports , takes over storage gateway operations from the storage gateway process .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20120117167A1

Filed: 2011-05-16     Issued: 2012-05-10

System and method for providing recommendations to a user in a viewing social network

(Original Assignee) Sony Corp     (Current Assignee) Sony Corp

Aran Sadja, Jeffrey Tang, Bryan Mihalov, Ludovic Douillet, Nobukazu Sugiyama
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (more servers) at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
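
As a reading aid for the charted method, a minimal Python sketch of the local processing flow (a producer's message is cached locally, then served to a co-located consumer); queue_cache, on_producer_send, and on_consumer_request are hypothetical names, not the patent's implementation.

    import queue
    from collections import defaultdict

    # Hypothetical local queue cache: one FIFO per remote datacenter queue.
    queue_cache = defaultdict(queue.Queue)

    def on_producer_send(queue_name, message):
        """Producer worker detected at the first server sending to a
        datacenter queue; the message is also stored locally."""
        queue_cache[queue_name].put(message)

    def on_consumer_request(queue_name):
        """Consumer worker at the same server requesting a message;
        the locally stored copy is provided instead of a remote fetch."""
        try:
            return queue_cache[queue_name].get_nowait()
        except queue.Empty:
            return None  # fall back to the remote datacenter queue

    on_producer_send("task-queue", b"payload")
    print(on_consumer_request("task-queue"))   # b'payload'
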
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .
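
For the mapped reference limitation, a minimal Python sketch of generating a viewing recommendation from the media preferences of a user and of the user's connections; user_info, viewing_recommendation, and the sample catalog are hypothetical.

    from collections import Counter

    # Hypothetical user information retrieved from a social networking server.
    user_info = {
        "media_preferences": ["drama", "sci-fi"],
        "connections": {
            "friend-1": ["sci-fi", "comedy"],
            "friend-2": ["sci-fi", "drama"],
        },
    }

    def viewing_recommendation(user_info, catalog):
        """Rank catalog items by how well their genre matches the preferences
        of the user and of the user's connections."""
        weights = Counter(user_info["media_preferences"])
        for prefs in user_info["connections"].values():
            weights.update(prefs)
        return max(catalog, key=lambda item: weights[item["genre"]])

    catalog = [{"title": "Show A", "genre": "sci-fi"},
               {"title": "Show B", "genre": "comedy"}]
    print(viewing_recommendation(user_info, catalog))   # Show A
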

US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel associated with the datacenter queue (more servers) ;

and modifying the message in response to receiving the signal .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command (more user) from the datacenter queue (more servers) , deleting the message from the datacenter queue .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more user (delete command) s operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request and the datacenter queue (more servers) associated with the message request .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (more servers) at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
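
A minimal Python sketch of the VMM-side behavior recited here (intercept, cache, provide, and react to command-channel signals); InterceptModule and its method names are hypothetical, not the patent's code.

    class InterceptModule:
        """Hypothetical VMM-side intercept module: caches producer messages
        locally and reacts to signals on the queue's command channel."""

        def __init__(self):
            self.cache = {}   # message id -> message body

        def intercept(self, msg_id, body):
            # message sent by the producer worker is intercepted and cached
            self.cache[msg_id] = body

        def provide(self, msg_id):
            # the cached copy is handed to a co-located consumer worker
            return self.cache.get(msg_id)

        def on_command_channel(self, signal, msg_id, new_body=None):
            # signals from the datacenter queue's command channel modify or
            # delete the locally held message
            if signal == "modify":
                self.cache[msg_id] = new_body
            elif signal == "delete":
                self.cache.pop(msg_id, None)

    vmm = InterceptModule()
    vmm.intercept("m1", b"v1")
    vmm.on_command_channel("modify", "m1", b"v2")
    print(vmm.provide("m1"))   # b'v2'
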
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command (more user) from the datacenter queue (more servers) , delete the message from the first server .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more user (delete command) s operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue (more servers) associated with the message request .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue (more servers) associated with the message .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (more servers) information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (more servers) , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (more servers) at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (more servers) information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (more servers) , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request and the datacenter queue (more servers) associated with the message request .
US20120117167A1
CLAIM 11
. A system comprising : one or more servers (datacenter queue) communicatively coupled , each server further communicatively coupled to one or more users operating one or more local devices ;
wherein at least one of the one or more servers comprises one or more processors for performing steps comprising : initiating communication with a social networking server maintaining user information corresponding to a user , the user information for the user comprising media preferences for the user , one or more connections associated with the user , and media preferences for each of the one or more connections ;
retrieving at least a portion of the user information for the user from the social networking server ;
and generating a viewing recommendation for the user at least in part based on at least one of the media preferences of the user , and the media preferences of the one or more connections , the viewing recommendation comprising one or more multi-media content .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20120254876A1

Filed: 2011-03-31     Issued: 2012-10-04

Systems and methods for coordinating computing functions to accomplish a task

(Original Assignee) Honeywell International Inc     (Current Assignee) Honeywell International Inc

Douglas L. Bishop, Jeff Vanderzweep, Tim Felke, Douglas Allen Bell, Issa Aljanabi
US9479472B2
CLAIM 7
. A computing device (computing device) to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions (memory location) ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20120254876A1
CLAIM 1
. A system for coordinating functions within a computing device (computing device) to accomplish a task , comprising : a plurality of standardized executable application modules (SEAMs) , each SEAM configured to execute on a processor to provide a unique function and to generate an event associated with the unique function associated with each SEAM ;
a computer readable storage medium having a configuration file recorded thereon , the computer readable storage medium comprising : a dynamic data store (DDS) and a static data store (SDS) , wherein the DDS comprises an event queue and one or more response queues , and wherein the SDS comprises a persistent software object , the persistent software object configured to map a specific event from the event queue to a pre-defined response record , and to assign a response queue into which the pre-defined response record is to be placed ;
and a workflow service module , the work flow service module configured to direct communication between the SDS , the DDS and each of the plurality of SEAMs .

US20120254876A1
CLAIM 20
. A method for coordinating functions of a computing device to accomplish a task , comprising : determining when an event queue stored on a memory device is empty ;
when the event queue is not empty , reading an event from the event queue ;
requesting a response record from a memory location (store instructions) based on the event ;
storing the response record in a response queue ;
when the event queue is empty , reading a response record from the response queue ;
and making a function call to an application identified in the response record .
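
The event-queue/response-queue loop recited in this reference claim can be illustrated with the following Python sketch; event_queue, response_queue, response_records, and workflow_step are hypothetical names standing in for the DDS queues, the SDS mapping, and the workflow service.

    from collections import deque

    # Hypothetical queues and response-record mapping (the SDS "persistent
    # software object" in the reference maps an event to a response record).
    event_queue = deque(["sensor-fault"])
    response_queue = deque()
    response_records = {"sensor-fault": {"app": "diagnose", "args": ("sensor-7",)}}

    def diagnose(sensor):
        return f"diagnosing {sensor}"

    apps = {"diagnose": diagnose}

    def workflow_step():
        if event_queue:                  # event queue not empty: read an event
            event = event_queue.popleft()
            response_queue.append(response_records[event])  # store response record
        elif response_queue:             # event queue empty: read a response record
            record = response_queue.popleft()
            return apps[record["app"]](*record["args"])      # call identified app

    workflow_step()          # consumes the event, queues the response record
    print(workflow_step())   # 'diagnosing sensor-7'
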

US9479472B2
CLAIM 8
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server .
US20120254876A1
CLAIM 1
. A system for coordinating functions within a computing device (computing device) to accomplish a task , comprising : a plurality of standardized executable application modules (SEAMs) , each SEAM configured to execute on a processor to provide a unique function and to generate an event associated with the unique function associated with each SEAM ;
a computer readable storage medium having a configuration file recorded thereon , the computer readable storage medium comprising : a dynamic data store (DDS) and a static data store (SDS) , wherein the DDS comprises an event queue and one or more response queues , and wherein the SDS comprises a persistent software object , the persistent software object configured to map a specific event from the event queue to a pre-defined response record , and to assign a response queue into which the pre-defined response record is to be placed ;
and a workflow service module , the work flow service module configured to direct communication between the SDS , the DDS and each of the plurality of SEAMs .

US9479472B2
CLAIM 9
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one (configured to store) or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20120254876A1
CLAIM 1
. A system for coordinating functions within a computing device (computing device) to accomplish a task , comprising : a plurality of standardized executable application modules (SEAMs) , each SEAM configured to execute on a processor to provide a unique function and to generate an event associated with the unique function associated with each SEAM ;
a computer readable storage medium having a configuration file recorded thereon , the computer readable storage medium comprising : a dynamic data store (DDS) and a static data store (SDS) , wherein the DDS comprises an event queue and one or more response queues , and wherein the SDS comprises a persistent software object , the persistent software object configured to map a specific event from the event queue to a pre-defined response record , and to assign a response queue into which the pre-defined response record is to be placed ;
and a workflow service module , the work flow service module configured to direct communication between the SDS , the DDS and each of the plurality of SEAMs .

US20120254876A1
CLAIM 4
. The system of claim 1 , wherein the workflow service module is configured to store (identify one) the event to the event queue and read the event from the event queue .

US9479472B2
CLAIM 10
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker associated with the message .
US20120254876A1
CLAIM 1
. A system for coordinating functions within a computing device (computing device) to accomplish a task , comprising : a plurality of standardized executable application modules (SEAMs) , each SEAM configured to execute on a processor to provide a unique function and to generate an event associated with the unique function associated with each SEAM ;
a computer readable storage medium having a configuration file recorded thereon , the computer readable storage medium comprising : a dynamic data store (DDS) and a static data store (SDS) , wherein the DDS comprises an event queue and one or more response queues , and wherein the SDS comprises a persistent software object , the persistent software object configured to map a specific event from the event queue to a pre-defined response record , and to assign a response queue into which the pre-defined response record is to be placed ;
and a workflow service module , the work flow service module configured to direct communication between the SDS , the DDS and each of the plurality of SEAMs .

US9479472B2
CLAIM 11
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue associated with the message .
US20120254876A1
CLAIM 1
. A system for coordinating functions within a computing device (computing device) to accomplish a task , comprising : a plurality of standardized executable application modules (SEAMs) , each SEAM configured to execute on a processor to provide a unique function and to generate an event associated with the unique function associated with each SEAM ;
a computer readable storage medium having a configuration file recorded thereon , the computer readable storage medium comprising : a dynamic data store (DDS) and a static data store (SDS) , wherein the DDS comprises an event queue and one or more response queues , and wherein the SDS comprises a persistent software object , the persistent software object configured to map a specific event from the event queue to a pre-defined response record , and to assign a response queue into which the pre-defined response record is to be placed ;
and a workflow service module , the work flow service module configured to direct communication between the SDS , the DDS and each of the plurality of SEAMs .

US9479472B2
CLAIM 12
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20120254876A1
CLAIM 1
. A system for coordinating functions within a computing device (computing device) to accomplish a task , comprising : a plurality of standardized executable application modules (SEAMs) , each SEAM configured to execute on a processor to provide a unique function and to generate an event associated with the unique function associated with each SEAM ;
a computer readable storage medium having a configuration file recorded thereon , the computer readable storage medium comprising : a dynamic data store (DDS) and a static data store (SDS) , wherein the DDS comprises an event queue and one or more response queues , and wherein the SDS comprises a persistent software object , the persistent software object configured to map a specific event from the event queue to a pre-defined response record , and to assign a response queue into which the pre-defined response record is to be placed ;
and a workflow service module , the work flow service module configured to direct communication between the SDS , the DDS and each of the plurality of SEAMs .

US9479472B2
CLAIM 13
. The computing device (computing device) of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information .
US20120254876A1
CLAIM 1
. A system for coordinating functions within a computing device (computing device) to accomplish a task , comprising : a plurality of standardized executable application modules (SEAMs) , each SEAM configured to execute on a processor to provide a unique function and to generate an event associated with the unique function associated with each SEAM ;
a computer readable storage medium having a configuration file recorded thereon , the computer readable storage medium comprising : a dynamic data store (DDS) and a static data store (SDS) , wherein the DDS comprises an event queue and one or more response queues , and wherein the SDS comprises a persistent software object , the persistent software object configured to map a specific event from the event queue to a pre-defined response record , and to assign a response queue into which the pre-defined response record is to be placed ;
and a workflow service module , the work flow service module configured to direct communication between the SDS , the DDS and each of the plurality of SEAMs .

US9479472B2
CLAIM 14
. The computing device (computing device) of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20120254876A1
CLAIM 1
. A system for coordinating functions within a computing device (computing device) to accomplish a task , comprising : a plurality of standardized executable application modules (SEAMs) , each SEAM configured to execute on a processor to provide a unique function and to generate an event associated with the unique function associated with each SEAM ;
a computer readable storage medium having a configuration file recorded thereon , the computer readable storage medium comprising : a dynamic data store (DDS) and a static data store (SDS) , wherein the DDS comprises an event queue and one or more response queues , and wherein the SDS comprises a persistent software object , the persistent software object configured to map a specific event from the event queue to a pre-defined response record , and to assign a response queue into which the pre-defined response record is to be placed ;
and a workflow service module , the work flow service module configured to direct communication between the SDS , the DDS and each of the plurality of SEAMs .

US9479472B2
CLAIM 15
. The computing device (computing device) of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
US20120254876A1
CLAIM 1
. A system for coordinating functions within a computing device (computing device) to accomplish a task , comprising : a plurality of standardized executable application modules (SEAMs) , each SEAM configured to execute on a processor to provide a unique function and to generate an event associated with the unique function associated with each SEAM ;
a computer readable storage medium having a configuration file recorded thereon , the computer readable storage medium comprising : a dynamic data store (DDS) and a static data store (SDS) , wherein the DDS comprises an event queue and one or more response queues , and wherein the SDS comprises a persistent software object , the persistent software object configured to map a specific event from the event queue to a pre-defined response record , and to assign a response queue into which the pre-defined response record is to be placed ;
and a workflow service module , the work flow service module configured to direct communication between the SDS , the DDS and each of the plurality of SEAMs .

US9479472B2
CLAIM 16
. The computing device (computing device) of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
US20120254876A1
CLAIM 1
. A system for coordinating functions within a computing device (computing device) to accomplish a task , comprising : a plurality of standardized executable application modules (SEAMs) , each SEAM configured to execute on a processor to provide a unique function and to generate an event associated with the unique function associated with each SEAM ;
a computer readable storage medium having a configuration file recorded thereon , the computer readable storage medium comprising : a dynamic data store (DDS) and a static data store (SDS) , wherein the DDS comprises an event queue and one or more response queues , and wherein the SDS comprises a persistent software object , the persistent software object configured to map a specific event from the event queue to a pre-defined response record , and to assign a response queue into which the pre-defined response record is to be placed ;
and a workflow service module , the work flow service module configured to direct communication between the SDS , the DDS and each of the plurality of SEAMs .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
KR20120100644A

Filed: 2011-03-04     Issued: 2012-09-12

Common message distributor and window message transmission method thereof (공통 메시지 분배기 및 그 윈도우 메시지 전송 방법)

(Original Assignee) Samsung Thales Co., Ltd. (삼성탈레스 주식회사)     

Yongmin Kim (김용민)
US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table (컨텐츠) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information (제어부) , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
KR20120100644A
CLAIM 2
The window message transmission method of claim 1 , wherein the window message sets , as a first parameter , the transmission path of the window message , as a second parameter , the type of the window message , and , as a third parameter , the contents (컨텐츠) (queue user table) of the window message .

KR20120100644A
CLAIM 4
A common message distributor connected to a plurality of modules to transmit window messages , the common message distributor comprising : a communication unit configured to receive window messages from the plurality of modules ; a storage unit configured to store the window messages and a window handle of a DLL (dynamic link library) module that includes a callback function for storing the window messages in a queue and a header information interpretation function for interpreting the window messages ; and a control unit (제어부) (consumer worker information) configured to determine a transmission destination of a window message using the callback function and the header information interpretation function called from the storage unit , and to control the communication unit to transmit the window message to the determined transmission destination .

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application is further configured to : update the queue user table (컨텐츠) based on the observed queue usage information .
KR20120100644A
CLAIM 2
The window message transmission method of claim 1 , wherein the window message sets , as a first parameter , the transmission path of the window message , as a second parameter , the type of the window message , and , as a third parameter , the contents (컨텐츠) (queue user table) of the window message .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table (컨텐츠) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
KR20120100644A
CLAIM 2
The window message transmission method of claim 1 , wherein the window message sets , as a first parameter , the transmission path of the window message , as a second parameter , the type of the window message , and , as a third parameter , the contents (컨텐츠) (queue user table) of the window message .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (second message) between the producer worker and the consumer worker .
KR20120100644A
CLAIM 3
The window message transmission method of claim 1 , wherein the function describing the window message is : LRESULT SendMessageTimeout(HWND hWnd , //handle to window UINT Msg , //message WPARAM wParam , //first message parameter LPARAM lParam , //second message (second message) parameter UINT fuFlags , //send options UINT uTimeout , //time-out duration PDWORD_PTR lpdwResult //return value for synchronous call) ;
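
The quoted function is the Win32 SendMessageTimeout API; a minimal, Windows-only Python ctypes sketch of invoking it (broadcasting a harmless WM_NULL with a one-second timeout) follows. The specific constants and the broadcast target are illustrative choices, not taken from the reference.

    import ctypes

    # Windows-only: ctypes.windll is unavailable on other platforms.
    user32 = ctypes.windll.user32
    HWND_BROADCAST = 0xFFFF
    WM_NULL = 0x0000
    SMTO_ABORTIFHUNG = 0x0002

    result = ctypes.c_size_t()          # receives the DWORD_PTR result of the call
    ok = user32.SendMessageTimeoutW(
        HWND_BROADCAST,                 # hWnd: handle to window (here: all top-level windows)
        WM_NULL,                        # Msg: message
        0,                              # wParam: first message parameter
        0,                              # lParam: second message parameter
        SMTO_ABORTIFHUNG,               # fuFlags: send options
        1000,                           # uTimeout: time-out duration in ms
        ctypes.byref(result))           # lpdwResult: return value for synchronous call
    print("sent" if ok else "timed out")
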

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table (컨텐츠) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information (제어부) , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
KR20120100644A
CLAIM 2
The window message transmission method of claim 1 , wherein the window message sets , as a first parameter , the transmission path of the window message , as a second parameter , the type of the window message , and , as a third parameter , the contents (컨텐츠) (queue user table) of the window message .

KR20120100644A
CLAIM 4
A common message distributor connected to a plurality of modules to transmit window messages , the common message distributor comprising : a communication unit configured to receive window messages from the plurality of modules ; a storage unit configured to store the window messages and a window handle of a DLL (dynamic link library) module that includes a callback function for storing the window messages in a queue and a header information interpretation function for interpreting the window messages ; and a control unit (제어부) (consumer worker information) configured to determine a transmission destination of a window message using the callback function and the header information interpretation function called from the storage unit , and to control the communication unit to transmit the window message to the determined transmission destination .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table (컨텐츠) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
KR20120100644A
CLAIM 2
The window message transmission method of claim 1 , wherein the window message sets , as a first parameter , the transmission path of the window message , as a second parameter , the type of the window message , and , as a third parameter , the contents (컨텐츠) (queue user table) of the window message .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20110282948A1

Filed: 2010-05-17     Issued: 2011-11-17

Email tags

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC

Krishna Vitaldevara, Sanchan Sahai Saxena, Eliot C. Gillum, Rebecca Ping Zhu, Tyler J. Schnoebelen
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (client devices) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20110282948A1
CLAIM 1
. A computer-implemented method , comprising : receiving email messages for distribution to client devices (second server) that correspond to respective recipients of the email messages ;
applying one or more email routing decisions to route an email message to an email folder for a recipient of the email message , the email folder including at least one of an email inbox , junk folder , or user-created folder ;
and tagging the email message with an email tag to generate a tagged email message , the email tag including a routing description that indicates why the email message was routed to the email folder .
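
A minimal Python sketch of the routing-plus-tagging behavior recited in this reference claim; route_and_tag, the spam_score threshold, and the sample rules are hypothetical.

    def route_and_tag(message):
        """Hypothetical routing decision plus tagging: the tag records why
        the message was routed to a given folder."""
        if message.get("spam_score", 0) > 0.8:
            folder, reason = "junk", "high spam score from content filter"
        elif message["sender"] in message.get("user_rules", {}):
            folder = message["user_rules"][message["sender"]]
            reason = f"user-created rule for sender {message['sender']}"
        else:
            folder, reason = "inbox", "no routing rule matched"
        message["folder"] = folder
        message["email_tag"] = {"routing_description": reason}
        return message

    msg = {"sender": "news@example.com", "spam_score": 0.1,
           "user_rules": {"news@example.com": "newsletters"}}
    print(route_and_tag(msg)["email_tag"])
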

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (client devices) .
US20110282948A1
CLAIM 1
. A computer-implemented method , comprising : receiving email messages for distribution to client devices (second server) that correspond to respective recipients of the email messages ;
applying one or more email routing decisions to route an email message to an email folder for a recipient of the email message , the email folder including at least one of an email inbox , junk folder , or user-created folder ;
and tagging the email message with an email tag to generate a tagged email message , the email tag including a routing description that indicates why the email message was routed to the email folder .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (client devices) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20110282948A1
CLAIM 1
. A computer-implemented method , comprising : receiving email messages for distribution to client devices (second server) that correspond to respective recipients of the email messages ;
applying one or more email routing decisions to route an email message to an email folder for a recipient of the email message , the email folder including at least one of an email inbox , junk folder , or user-created folder ;
and tagging the email message with an email tag to generate a tagged email message , the email tag including a routing description that indicates why the email message was routed to the email folder .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (email messages) between the producer worker and the consumer worker .
US20110282948A1
CLAIM 1
. A computer-implemented method , comprising : receiving email messages (second message) for distribution to client devices that correspond to respective recipients of the email messages ;
applying one or more email routing decisions to route an email message to an email folder for a recipient of the email message , the email folder including at least one of an email inbox , junk folder , or user-created folder ;
and tagging the email message with an email tag to generate a tagged email message , the email tag including a routing description that indicates why the email message was routed to the email folder .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (client devices) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20110282948A1
CLAIM 1
. A computer-implemented method , comprising : receiving email messages for distribution to client devices (second server) that correspond to respective recipients of the email messages ;
applying one or more email routing decisions to route an email message to an email folder for a recipient of the email message , the email folder including at least one of an email inbox , junk folder , or user-created folder ;
and tagging the email message with an email tag to generate a tagged email message , the email tag including a routing description that indicates why the email message was routed to the email folder .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20110213991A1

Filed: 2010-02-26     Issued: 2011-09-01

Processor core communication in multi-core processor

(Original Assignee) Empire Technology Development LLC     (Current Assignee) Empire Technology Development LLC

Andrew Wolfe, Marc Elliot Levitt
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (more processor cores) at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20110213991A1
CLAIM 4
. The multi-core processor of claim 1 , wherein the interface block further comprises a synchronizer configured to synchronize the first clock signal and the second clock signal for communication between one or more processor cores (datacenter queue, datacenter queue information) of the first set of processor cores and one or more processor cores of the second set of processor cores .

US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel associated with the datacenter queue (more processor cores) ;

and modifying the message in response to receiving the signal .
US20110213991A1
CLAIM 4
. The multi-core processor of claim 1 , wherein the interface block further comprises a synchronizer configured to synchronize the first clock signal and the second clock signal for communication between one or more processor cores (datacenter queue, datacenter queue information) of the first set of processor cores and one or more processor cores of the second set of processor cores .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command from the datacenter queue (more processor cores) , deleting the message from the datacenter queue .
US20110213991A1
CLAIM 4
. The multi-core processor of claim 1 , wherein the interface block further comprises a synchronizer configured to synchronize the first clock signal and the second clock signal for communication between one or more processor cores (datacenter queue, datacenter queue information) of the first set of processor cores and one or more processor cores of the second set of processor cores .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one (control signals) or more of : the consumer worker associated with the message request and the datacenter queue (more processor cores) associated with the message request .
US20110213991A1
CLAIM 4
. The multi-core processor of claim 1 , wherein the interface block further comprises a synchronizer configured to synchronize the first clock signal and the second clock signal for communication between one or more processor cores (datacenter queue, datacenter queue information) of the first set of processor cores and one or more processor cores of the second set of processor cores .

US20110213991A1
CLAIM 5
. The multi-core processor of claim 1 , wherein the first set of processor cores and the second set of processor cores are configured to receive one or more control signals (identifying one) from one or more control blocks located in a periphery of the multi-core processor .

US9479472B2
CLAIM 7
. A computing device (computing device) to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (more processor cores) at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20110213991A1
CLAIM 4
. The multi-core processor of claim 1 , wherein the interface block further comprises a synchronizer configured to synchronize the first clock signal and the second clock signal for communication between one or more processor cores (datacenter queue, datacenter queue information) of the first set of processor cores and one or more processor cores of the second set of processor cores .

US20110213991A1
CLAIM 18
. A computer-readable medium containing a sequence of instructions for managing communications in a multi-core processor that includes a plurality of processor cores having a first set of processor cores and a second set of processor cores , which when executed by a computing device (computing device) , causes the computing device to : issue a first command to idle communications with one or more of the plurality of processor cores in response to a clock frequency change request for the first set of processor cores ;
issue a second command to resume communications with one or more of the plurality of processor cores after having determined that a first phase lock loop operation associated with the first set of processor cores has acquired a first lock signal and a second phase lock loop operation associated with the second set of processor cores has also acquired a second lock signal .
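
The idle/resume sequencing around a clock frequency change recited here can be illustrated with the following Python sketch; the threading events standing in for the two phase lock loop lock signals and the issue_command helper are hypothetical.

    import threading

    # Hypothetical lock-acquired events for the two phase lock loops.
    pll_lock_first = threading.Event()
    pll_lock_second = threading.Event()

    def issue_command(command, core_set):
        print(f"{command} communications with {core_set}")

    def change_clock_frequency(core_set):
        """Sequence from the reference claim: idle inter-core communication,
        wait until both PLLs report lock, then resume communication."""
        issue_command("idle", core_set)      # first command
        pll_lock_first.wait()
        pll_lock_second.wait()
        issue_command("resume", core_set)    # second command

    pll_lock_first.set()     # in hardware these would be set by the PLLs
    pll_lock_second.set()
    change_clock_frequency("first set of processor cores")
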

US9479472B2
CLAIM 8
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue (more processor cores) , delete the message from the first server .
US20110213991A1
CLAIM 4
. The multi-core processor of claim 1 , wherein the interface block further comprises a synchronizer configured to synchronize the first clock signal and the second clock signal for communication between one or more processor cores (datacenter queue, datacenter queue information) of the first set of processor cores and one or more processor cores of the second set of processor cores .

US20110213991A1
CLAIM 18
. A computer-readable medium containing a sequence of instructions for managing communications in a multi-core processor that includes a plurality of processor cores having a first set of processor cores and a second set of processor cores , which when executed by a computing device (computing device) , causes the computing device to : issue a first command to idle communications with one or more of the plurality of processor cores in response to a clock frequency change request for the first set of processor cores ;
issue a second command to resume communications with one or more of the plurality of processor cores after having determined that a first phase lock loop operation associated with the first set of processor cores has acquired a first lock signal and a second phase lock loop operation associated with the second set of processor cores has also acquired a second lock signal .

US9479472B2
CLAIM 9
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue (more processor cores) associated with the message request .
US20110213991A1
CLAIM 4
. The multi-core processor of claim 1 , wherein the interface block further comprises a synchronizer configured to synchronize the first clock signal and the second clock signal for communication between one or more processor cores (datacenter queue, datacenter queue information) of the first set of processor cores and one or more processor cores of the second set of processor cores .

US20110213991A1
CLAIM 18
. A computer-readable medium containing a sequence of instructions for managing communications in a multi-core processor that includes a plurality of processor cores having a first set of processor cores and a second set of processor cores , which when executed by a computing device (computing device) , causes the computing device to : issue a first command to idle communications with one or more of the plurality of processor cores in response to a clock frequency change request for the first set of processor cores ;
issue a second command to resume communications with one or more of the plurality of processor cores after having determined that a first phase lock loop operation associated with the first set of processor cores has acquired a first lock signal and a second phase lock loop operation associated with the second set of processor cores has also acquired a second lock signal .

US9479472B2
CLAIM 10
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker associated with the message .
US20110213991A1
CLAIM 18
. A computer-readable medium containing a sequence of instructions for managing communications in a multi-core processor that includes a plurality of processor cores having a first set of processor cores and a second set of processor cores , which when executed by a computing device (computing device) , causes the computing device to : issue a first command to idle communications with one or more of the plurality of processor cores in response to a clock frequency change request for the first set of processor cores ;
issue a second command to resume communications with one or more of the plurality of processor cores after having determined that a first phase lock loop operation associated with the first set of processor cores has acquired a first lock signal and a second phase lock loop operation associated with the second set of processor cores has also acquired a second lock signal .

US9479472B2
CLAIM 11
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue (more processor cores) associated with the message .
US20110213991A1
CLAIM 4
. The multi-core processor of claim 1 , wherein the interface block further comprises a synchronizer configured to synchronize the first clock signal and the second clock signal for communication between one or more processor cores (datacenter queue, datacenter queue information) of the first set of processor cores and one or more processor cores of the second set of processor cores .

US20110213991A1
CLAIM 18
. A computer-readable medium containing a sequence of instructions for managing communications in a multi-core processor that includes a plurality of processor cores having a first set of processor cores and a second set of processor cores , which when executed by a computing device (computing device) , causes the computing device to : issue a first command to idle communications with one or more of the plurality of processor cores in response to a clock frequency change request for the first set of processor cores ;
issue a second command to resume communications with one or more of the plurality of processor cores after having determined that a first phase lock loop operation associated with the first set of processor cores has acquired a first lock signal and a second phase lock loop operation associated with the second set of processor cores has also acquired a second lock signal .

US9479472B2
CLAIM 12
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (more processor cores) information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20110213991A1
CLAIM 4
. The multi-core processor of claim 1 , wherein the interface block further comprises a synchronizer configured to synchronize the first clock signal and the second clock signal for communication between one or more processor cores (datacenter queue, datacenter queue information) of the first set of processor cores and one or more processor cores of the second set of processor cores .

US20110213991A1
CLAIM 18
. A computer-readable medium containing a sequence of instructions for managing communications in a multi-core processor that includes a plurality of processor cores having a first set of processor cores and a second set of processor cores , which when executed by a computing device (computing device) , causes the computing device to : issue a first command to idle communications with one or more of the plurality of processor cores in response to a clock frequency change request for the first set of processor cores ;
issue a second command to resume communications with one or more of the plurality of processor cores after having determined that a first phase lock loop operation associated with the first set of processor cores has acquired a first lock signal and a second phase lock loop operation associated with the second set of processor cores has also acquired a second lock signal .

US9479472B2
CLAIM 13
. The computing device (computing device) of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information .
US20110213991A1
CLAIM 18
. A computer-readable medium containing a sequence of instructions for managing communications in a multi-core processor that includes a plurality of processor cores having a first set of processor cores and a second set of processor cores , which when executed by a computing device (computing device) , causes the computing device to : issue a first command to idle communications with one or more of the plurality of processor cores in response to a clock frequency change request for the first set of processor cores ;
issue a second command to resume communications with one or more of the plurality of processor cores after having determined that a first phase lock loop operation associated with the first set of processor cores has acquired a first lock signal and a second phase lock loop operation associated with the second set of processor cores has also acquired a second lock signal .

US9479472B2
CLAIM 14
. The computing device (computing device) of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (more processor cores) , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20110213991A1
CLAIM 4
. The multi-core processor of claim 1 , wherein the interface block further comprises a synchronizer configured to synchronize the first clock signal and the second clock signal for communication between one or more processor cores (datacenter queue, datacenter queue information) of the first set of processor cores and one or more processor cores of the second set of processor cores .

US20110213991A1
CLAIM 18
. A computer-readable medium containing a sequence of instructions for managing communications in a multi-core processor that includes a plurality of processor cores having a first set of processor cores and a second set of processor cores , which when executed by a computing device (computing device) , causes the computing device to : issue a first command to idle communications with one or more of the plurality of processor cores in response to a clock frequency change request for the first set of processor cores ;
issue a second command to resume communications with one or more of the plurality of processor cores after having determined that a first phase lock loop operation associated with the first set of processor cores has acquired a first lock signal and a second phase lock loop operation associated with the second set of processor cores has also acquired a second lock signal .

US9479472B2
CLAIM 15
. The computing device (computing device) of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
US20110213991A1
CLAIM 18
. A computer-readable medium containing a sequence of instructions for managing communications in a multi-core processor that includes a plurality of processor cores having a first set of processor cores and a second set of processor cores , which when executed by a computing device (computing device) , causes the computing device to : issue a first command to idle communications with one or more of the plurality of processor cores in response to a clock frequency change request for the first set of processor cores ;
issue a second command to resume communications with one or more of the plurality of processor cores after having determined that a first phase lock loop operation associated with the first set of processor cores has acquired a first lock signal and a second phase lock loop operation associated with the second set of processor cores has also acquired a second lock signal .

US9479472B2
CLAIM 16
. The computing device (computing device) of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
US20110213991A1
CLAIM 18
. A computer-readable medium containing a sequence of instructions for managing communications in a multi-core processor that includes a plurality of processor cores having a first set of processor cores and a second set of processor cores , which when executed by a computing device (computing device) , causes the computing device to : issue a first command to idle communications with one or more of the plurality of processor cores in response to a clock frequency change request for the first set of processor cores ;
issue a second command to resume communications with one or more of the plurality of processor cores after having determined that a first phase lock loop operation associated with the first set of processor cores has acquired a first lock signal and a second phase lock loop operation associated with the second set of processor cores has also acquired a second lock signal .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (more processor cores) at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20110213991A1
CLAIM 4
. The multi-core processor of claim 1 , wherein the interface block further comprises a synchronizer configured to synchronize the first clock signal and the second clock signal for communication between one or more processor cores (datacenter queue, datacenter queue information) of the first set of processor cores and one or more processor cores of the second set of processor cores .
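As a reading aid for the element chain of '472 claim 17 (detect producer, intercept the message, cache it locally, serve the co-located consumer, react to command-channel signals and delete commands), the sketch below is a minimal, hypothetical Python illustration; class and method names are invented here and are not drawn from the '472 specification.

```python
# Hypothetical sketch of the '472 claim 17 local queue-cache flow.
from collections import defaultdict, deque


class LocalQueueCache:
    def __init__(self):
        self._cache = defaultdict(deque)        # queue name -> locally stored messages

    def intercept(self, queue_name, message):
        # Producer side: store the intercepted message locally.
        self._cache[queue_name].append(message)

    def request(self, queue_name):
        # Consumer side: serve the message request from the local copy.
        msgs = self._cache[queue_name]
        return msgs[0] if msgs else None

    def modify(self, queue_name, transform):
        # Command-channel signal: modify the cached message in response.
        msgs = self._cache[queue_name]
        if msgs:
            msgs[0] = transform(msgs[0])

    def delete(self, queue_name):
        # Delete command from the datacenter queue: drop the cached message.
        msgs = self._cache[queue_name]
        if msgs:
            msgs.popleft()


if __name__ == "__main__":
    cache = LocalQueueCache()
    cache.intercept("jobs", {"body": "resize image 42"})
    print(cache.request("jobs"))
    cache.modify("jobs", lambda m: {**m, "hidden": True})
    cache.delete("jobs")
```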

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (more processor cores) information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20110213991A1
CLAIM 4
. The multi-core processor of claim 1 , wherein the interface block further comprises a synchronizer configured to synchronize the first clock signal and the second clock signal for communication between one or more processor cores (datacenter queue, datacenter queue information) of the first set of processor cores and one or more processor cores of the second set of processor cores .
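The "queue user table" of '472 claims 12 and 18 can be pictured as a simple relation over observed queue usage (which worker was seen using which datacenter queue, and in what role). The sketch below is an illustrative, hypothetical structure with assumed field names, not the patent's implementation.

```python
# Hypothetical queue user table built and updated from observed queue usage.
from dataclasses import dataclass, field


@dataclass
class QueueUserTable:
    rows: list = field(default_factory=list)    # (worker_id, role, queue_name)

    def observe(self, worker_id, role, queue_name):
        # Constructing and updating the table are the same operation here:
        # every observation adds a row if it is not already present.
        row = (worker_id, role, queue_name)
        if row not in self.rows:
            self.rows.append(row)

    def producers(self, queue_name):
        return [w for w, r, q in self.rows if r == "producer" and q == queue_name]

    def consumers(self, queue_name):
        return [w for w, r, q in self.rows if r == "consumer" and q == queue_name]


if __name__ == "__main__":
    table = QueueUserTable()
    table.observe("vm-a/worker-1", "producer", "queue-7")   # observed send
    table.observe("vm-b/worker-3", "consumer", "queue-7")   # observed message request
    print(table.producers("queue-7"), table.consumers("queue-7"))
```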

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (more processor cores) , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20110213991A1
CLAIM 4
. The multi-core processor of claim 1 , wherein the interface block further comprises a synchronizer configured to synchronize the first clock signal and the second clock signal for communication between one or more processor cores (datacenter queue, datacenter queue information) of the first set of processor cores and one or more processor cores of the second set of processor cores .
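The pair-matching step of '472 claims 14 and 19 reduces to joining the producer rows and consumer rows of such a table on the queue name. A minimal, hypothetical sketch (helper name assumed):

```python
# Hypothetical matching of producer/consumer pairs via queue usage observations.
def match_pairs(rows):
    """rows: iterable of (worker_id, role, queue_name) observations."""
    producers, consumers = {}, {}
    for worker, role, queue in rows:
        (producers if role == "producer" else consumers).setdefault(queue, []).append(worker)
    pairs = []
    for queue, senders in producers.items():
        for producer in senders:
            for consumer in consumers.get(queue, []):
                pairs.append((producer, consumer, queue))
    return pairs


if __name__ == "__main__":
    observed = [
        ("vm-a/worker-1", "producer", "queue-7"),
        ("vm-b/worker-3", "consumer", "queue-7"),
        ("vm-c/worker-9", "consumer", "queue-2"),
    ]
    print(match_pairs(observed))   # only the queue-7 producer/consumer pair matches
```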

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one (control signals) or more of : the consumer worker associated with the message request and the datacenter queue (more processor cores) associated with the message request .
US20110213991A1
CLAIM 4
. The multi-core processor of claim 1 , wherein the interface block further comprises a synchronizer configured to synchronize the first clock signal and the second clock signal for communication between one or more processor cores (datacenter queue, datacenter queue information) of the first set of processor cores and one or more processor cores of the second set of processor cores .

US20110213991A1
CLAIM 5
. The multi-core processor of claim 1 , wherein the first set of processor cores and the second set of processor cores are configured to receive one or more control signals (identifying one) from one or more control blocks located in a periphery of the multi-core processor .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20100185665A1

Filed: 2010-01-21     Issued: 2010-07-22

Office-Based Notification Messaging System

(Original Assignee) SUNSTEIN KANN MURPHY AND TIMBERS LLP     (Current Assignee) SUNSTEIN KANN MURPHY AND TIMBERS LLP

Monroe Horn, Rory A. Apperson, Andrei S. MacKenzie
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (message recipients) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker, determining matching producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .
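For comparison purposes, the check-in filtering recited in US20100185665A1 claim 16 amounts to intersecting the selected recipient group with the set of checked-in users before sending. The sketch below is a hypothetical illustration with an assumed data model, not the reference's implementation.

```python
# Hypothetical sketch of sending a notification only to checked-in recipients.
def send_office_notification(message, recipient_group, checked_in_users):
    delivered_to = [user for user in recipient_group if user in checked_in_users]
    return [(user, message) for user in delivered_to]


if __name__ == "__main__":
    checked_in = {"alice", "carol"}                    # updated as users check in
    group = ["alice", "bob", "carol"]                  # user-selected recipient group
    print(send_office_notification("Staff meeting at 3pm", group, checked_in))
```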

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker (message recipients) prior to storing the message in the queue cache at the second server .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker, determining matching producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker (message recipients) on a first virtual machine ;

and executing the consumer worker on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker, determining matching producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker (message recipients) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker, determining matching producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one (configured to store) or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store (identify one) information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker (message recipients) associated with the message .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker, determining matching producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (message recipients) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker, determining matching producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker (message recipients) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker, determining matching producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker (message recipients) and the consumer worker .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker, determining matching producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker (message recipients) ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker, determining matching producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker (message recipients) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker, determining matching producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (message recipients) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker, determining matching producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker (message recipients) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20100185665A1
CLAIM 16
. A computer-implemented office notification system , comprising : an office-based messaging system coupled to an office-based computer system and configured to receive user input for creating an office notification message , including a user-selected messaging system user group of message recipients (producer worker, determining matching producer worker) and an expiration ;
a check-in computer application configured to execute on the office-based computer system and prompt a user of the office-based computer system to check in upon computer system login ;
and a database coupled to the computer system and configured to store information about checked-in users ;
wherein the check-in computer application is configured to update the database as users check in ;
and wherein the office-based messaging system is configured to : compare the selected messaging system user group of message recipients to the information about checked-in users in the database ;
and send the created office notification message to members of the selected message system user group of message recipients who are checked in .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20110138400A1

Filed: 2009-12-03     Issued: 2011-06-09

Automated merger of logically associated messages in a message queue

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

Allan T. Chandler, Bret W. Dixon
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .
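The merge-on-enqueue behavior of US20110138400A1 (claims 1 and 5) can be summarized as: look up the new message's association key among the queued messages and merge on a hit, otherwise append. The sketch below is a hypothetical illustration; the merge policy shown (concatenating message bodies) is an assumption, since the claims do not specify one.

```python
# Hypothetical sketch of merging logically associated messages in a queue.
def enqueue_with_merge(queue, new_message, key_field="association_key"):
    key = new_message.get(key_field)
    if key is not None:
        for existing in queue:
            if existing.get(key_field) == key:
                # Merge the new message with the located associated message.
                existing["body"] += "\n" + new_message["body"]
                return queue
    queue.append(new_message)        # no associated message found: add normally
    return queue


if __name__ == "__main__":
    q = [{"association_key": "order-17", "body": "item added"}]
    enqueue_with_merge(q, {"association_key": "order-17", "body": "address updated"})
    print(q)   # a single merged message for order-17
```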

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker (one processor) prior to storing the message in the queue cache at the second server .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker (one processor) on a first virtual machine ;

and executing the consumer worker on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions (host computing platform) ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20110138400A1
CLAIM 1
. A method for message merging in a messaging queue , the method comprising : receiving a request to add a new message to a message queue in a message queue manager executing in memory by a processor of a host computing platform (store instructions) ;
identifying an association key associating the new message with an existing message in the message queue ;
locating an associated message in the message queue corresponding to the identified association key ;
and , merging the new message with the located associated message in the message queue .

US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker (one processor) associated with the message .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (one processor) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker (one processor) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker (one processor) and the consumer worker .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker (one processor) ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (one processor) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker (one processor) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20110138400A1
CLAIM 5
. A message queueing data processing system comprising : a host computing platform comprising memory and at least one processor (producer worker) ;
a message queue coupled to the host computing platform ;
a message queue manager coupled to the message queue and executing by the processor in the memory of the host computing platform ;
and , a message merge module coupled to the message queue manager , the module comprising program code enabled upon execution while in memory by a processor of a computer to identify in a request to add a new message to the message queue received by the message queue manager , an association key associating the new message with an existing message in the message queue , to locate an associated message in the message queue corresponding to the identified association , and to merge the new message with the located associated message in the message queue .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN101668019A

Filed: 2009-09-30     Issued: 2010-03-10

Gateway determination method and apparatus, and message sending method and system

(Original Assignee) ZTE Corp     

黄翔
US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table (corresponding to the identifier) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
CN101668019A
CLAIM 6
. A multimedia message sending method , characterized by comprising : receiving a multimedia message gateway query message sent by a value-added service server after the value-added service server generates a multimedia message ; obtaining a service processing status or a resource occupancy status of each multimedia message gateway , and determining , according to the service processing status , the multimedia message gateway with the strongest current service processing capability , or determining , according to the resource occupancy status , the multimedia message gateway with the most remaining resources ; and sending the corresponding identifier of the determined multimedia message gateway to the value-added service server , so that the value-added service server sends the multimedia message to the multimedia message gateway corresponding to the identifier (queue user table) .
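The gateway-selection step of CN101668019A claim 6 is, in essence, an argmax over per-gateway status metrics. The sketch below is a hypothetical illustration with assumed metric names and values, not the reference's implementation.

```python
# Hypothetical sketch of selecting the multimedia message gateway with the
# strongest processing capability or the most remaining resources.
def select_gateway(gateways, by="capability"):
    """gateways: dict of gateway_id -> {"capability": float, "free_resources": float}."""
    metric = "capability" if by == "capability" else "free_resources"
    return max(gateways, key=lambda g: gateways[g][metric])


if __name__ == "__main__":
    status = {
        "mmsc-1": {"capability": 0.4, "free_resources": 120.0},
        "mmsc-2": {"capability": 0.9, "free_resources": 80.0},
    }
    print(select_gateway(status))                    # strongest processing capability
    print(select_gateway(status, by="resources"))    # most remaining resources
```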

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application is further configured to : update the queue user table (corresponding to the identifier) based on the observed queue usage information .
CN101668019A
CLAIM 6
. A multimedia message sending method , characterized by comprising : receiving a multimedia message gateway query message sent by a value-added service server after the value-added service server generates a multimedia message ; obtaining a service processing status or a resource occupancy status of each multimedia message gateway , and determining , according to the service processing status , the multimedia message gateway with the strongest current service processing capability , or determining , according to the resource occupancy status , the multimedia message gateway with the most remaining resources ; and sending the corresponding identifier of the determined multimedia message gateway to the value-added service server , so that the value-added service server sends the multimedia message to the multimedia message gateway corresponding to the identifier (queue user table) .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table (corresponding to the identifier) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
CN101668019A
CLAIM 6
. A multimedia message sending method , characterized by comprising : receiving a multimedia message gateway query message sent by a value-added service server after the server generates a multimedia message ; acquiring the service processing status or resource occupancy status of each multimedia message gateway , determining , according to the service processing status , the multimedia message gateway with the strongest current service processing capability , or determining , according to the resource occupancy status , the multimedia message gateway with the most remaining resources ; and sending the corresponding identifier of the determined multimedia message gateway to the value-added service server , so that the value-added service server sends the multimedia message to the multimedia message gateway corresponding to that identifier (queue user table) .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (multimedia message) between the producer worker and the consumer worker .
CN101668019A
CLAIM 5
. The method according to claim 4 , characterized in that : when the service processing delays of at least two multimedia message (second message) gateways are the same , acquiring the remaining service volume or the remaining throughput of the at least two multimedia message gateways and selecting the multimedia message gateway with the largest remaining service volume or remaining throughput ; when the remaining service volumes of at least two multimedia message gateways are the same , acquiring the service processing delays of the at least two multimedia message gateways and selecting the multimedia message gateway with the shortest service processing delay , or acquiring the remaining throughputs of the at least two multimedia message gateways and selecting the multimedia message gateway with the largest remaining throughput ; and when the remaining throughputs of at least two multimedia message gateways are the same , acquiring the service processing delays of the at least two multimedia message gateways and selecting the multimedia message gateway with the shortest service processing delay , or acquiring the remaining service volumes of the at least two multimedia message gateways and selecting the multimedia message gateway with the largest remaining service volume .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table (corresponding identifier) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
CN101668019A
CLAIM 6
. A multimedia message sending method , characterized by comprising : receiving a multimedia message gateway query message sent by a value-added service server after the server generates a multimedia message ; acquiring the service processing status or resource occupancy status of each multimedia message gateway , determining , according to the service processing status , the multimedia message gateway with the strongest current service processing capability , or determining , according to the resource occupancy status , the multimedia message gateway with the most remaining resources ; and sending the corresponding identifier of the determined multimedia message gateway to the value-added service server , so that the value-added service server sends the multimedia message to the multimedia message gateway corresponding to that identifier (queue user table) .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table (corresponding identifier) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
CN101668019A
CLAIM 6
. A multimedia message sending method , characterized by comprising : receiving a multimedia message gateway query message sent by a value-added service server after the server generates a multimedia message ; acquiring the service processing status or resource occupancy status of each multimedia message gateway , determining , according to the service processing status , the multimedia message gateway with the strongest current service processing capability , or determining , according to the resource occupancy status , the multimedia message gateway with the most remaining resources ; and sending the corresponding identifier of the determined multimedia message gateway to the value-added service server , so that the value-added service server sends the multimedia message to the multimedia message gateway corresponding to that identifier (queue user table) .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20100191783A1

Filed: 2009-07-24     Issued: 2010-07-29

Method and system for interfacing to cloud storage

(Original Assignee) Nasuni Corp     (Current Assignee) Nasuni Corp

Robert S. Mason, Andres Rodriguez
US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command (more user) from the datacenter queue , deleting the message from the datacenter queue .
US20100191783A1
CLAIM 13
. An apparatus for configuring one or more user (delete command) local file systems to interface to cloud storage , comprising : a processor ;
a computer-readable medium having stored thereon instructions that , when executed by the processor performs a configuration method , comprising : creating a volume in cloud storage for use in storing a series of structured data representations that represent versions of a user's local file system ;
associating to the volume a file system agent that executes in the user local file system , wherein the file system agent intercepts local file system data and generates the series of structured data representations ;
and identifying one or more storage service providers to host the volume .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one (identifying one) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20100191783A1
CLAIM 13
. An apparatus for configuring one or more user local file systems to interface to cloud storage , comprising : a processor ;
a computer-readable medium having stored thereon instructions that , when executed by the processor performs a configuration method , comprising : creating a volume in cloud storage for use in storing a series of structured data representations that represent versions of a user's local file system ;
associating to the volume a file system agent that executes in the user local file system , wherein the file system agent intercepts local file system data and generates the series of structured data representations ;
and identifying one (identifying one) or more storage service providers to host the volume .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command (more user) from the datacenter queue , delete the message from the first server .
US20100191783A1
CLAIM 13
. An apparatus for configuring one or more user (delete command) local file systems to interface to cloud storage , comprising : a processor ;
a computer-readable medium having stored thereon instructions that , when executed by the processor performs a configuration method , comprising : creating a volume in cloud storage for use in storing a series of structured data representations that represent versions of a user's local file system ;
associating to the volume a file system agent that executes in the user local file system , wherein the file system agent intercepts local file system data and generates the series of structured data representations ;
and identifying one or more storage service providers to host the volume .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (structured data) between the producer worker and the consumer worker .
US20100191783A1
CLAIM 1
. A computer-readable medium having stored thereon instructions that , when executed by a processor , perform a method associated with a local file system , the method comprising : intercepting local file system data traffic and generating , as metadata , a series of one or more structured data (second message) representations of the file system each corresponding to a version of the file system ;
caching at least first and second portions of the metadata and the local file system data represented by the metadata in association with the local file system ;
exporting the metadata and local file system data to one or more storage service providers ;
wherein the first portion cached represents metadata and local file system data that is to be written to the one or more storage service providers , and the second portion cached represents recently used local file system data .
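
For readability, the following sketch illustrates the mechanism quoted from US20100191783A1 claim 1 under stated assumptions: a hypothetical FileSystemAgent intercepts writes, emits one structured metadata representation per version, keeps a cached portion pending export, and pushes the metadata to a storage service provider. All names are illustrative, not the reference's code.

import hashlib
import json
import time

class FileSystemAgent:
    # Sketch only: intercepts local file-system writes, records one structured
    # metadata representation per version, caches data pending export, and
    # pushes the metadata to a storage service provider.
    def __init__(self):
        self.versions = []      # series of structured data representations
        self.pending = []       # first cached portion: awaiting export
        self.recent = {}        # second cached portion: recently used data

    def intercept_write(self, path, data):
        representation = {
            "version": len(self.versions) + 1,
            "path": path,
            "sha256": hashlib.sha256(data).hexdigest(),
            "timestamp": time.time(),
        }
        self.versions.append(representation)
        self.pending.append((representation, data))
        self.recent[path] = data
        return representation

    def export(self, provider):
        # Push pending metadata (and, in a real agent, the data) to the provider.
        while self.pending:
            representation, _data = self.pending.pop(0)
            provider.append(json.dumps(representation))

agent = FileSystemAgent()
agent.intercept_write("/home/user/report.txt", b"draft 1")
exported = []
agent.export(exported)
print(exported[0])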

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one (identifying one) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20100191783A1
CLAIM 13
. An apparatus for configuring one or more user local file systems to interface to cloud storage , comprising : a processor ;
a computer-readable medium having stored thereon instructions that , when executed by the processor performs a configuration method , comprising : creating a volume in cloud storage for use in storing a series of structured data representations that represent versions of a user's local file system ;
associating to the volume a file system agent that executes in the user local file system , wherein the file system agent intercepts local file system data and generates the series of structured data representations ;
and identifying one (identifying one) or more storage service providers to host the volume .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20100010671A1

Filed: 2009-07-06     Issued: 2010-01-14

Information processing system, information processing method, robot control system, robot control method, and computer program

(Original Assignee) Sony Corp     (Current Assignee) Sony Corp

Atsushi Miyamoto
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (respective processes) at a first server , wherein the producer worker sends a message to a datacenter queue (different computer) at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (reception information) to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each , of the modules ;
and at configuration file which includes computer names which execute processes , module names included , in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computer (datacenter queue) s .

US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .
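
The charted independent claim describes producer messages bound for a remote datacenter queue being cached at the producer's server and served to a co-located consumer. A minimal sketch of that behavior, assuming hypothetical names (LocalQueueCache, on_producer_send, on_consumer_request) rather than the patent's own code, is:

from collections import defaultdict, deque

class LocalQueueCache:
    # Sketch only: messages a local producer sends toward a remote datacenter
    # queue are also kept locally so a co-located consumer can be served
    # without a round trip to the remote queue.
    def __init__(self):
        self.cache = defaultdict(deque)   # queue name -> locally stored messages

    def on_producer_send(self, queue_name, message):
        self.cache[queue_name].append(message)

    def on_consumer_request(self, queue_name):
        if self.cache[queue_name]:
            return self.cache[queue_name].popleft()   # served from the local cache
        return None                                   # would fall back to the remote queue

cache = LocalQueueCache()
cache.on_producer_send("jobs", "resize image 17")
print(cache.on_consumer_request("jobs"))   # resize image 17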

US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel associated with the datacenter queue (different computer) ;

and modifying the message in response to receiving the signal .
US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computer (datacenter queue) s .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command from the datacenter queue (different computer) , deleting the message from the datacenter queue .
US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computer (datacenter queue) s .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request (reception information) and the datacenter queue (different computer) associated with the message request .
US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each , of the modules ;
and at configuration file which includes computer names which execute processes , module names included , in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computer (datacenter queue) s .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker (respective processes) prior to storing the message in the queue cache at the second server .
US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .
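
The quoted broker limitation (serialization and deserialization omitted when the transmission source and reception source share a process) can be pictured with the short sketch below; the MessageBroker class and the pickle round trip are illustrative assumptions, not the reference's implementation.

import pickle

class MessageBroker:
    # Sketch only: serialization/deserialization is skipped when the
    # transmission source and the reception source share a process.
    def __init__(self, process_id):
        self.process_id = process_id

    def deliver(self, payload, dest_process_id):
        if dest_process_id == self.process_id:
            return payload                    # same process: direct hand-off
        wire = pickle.dumps(payload)          # different process: serialize
        return pickle.loads(wire)             # ... and deserialize at the far end

broker = MessageBroker("proc-A")
print(broker.deliver({"cmd": "move"}, "proc-A"))   # no serialization performed
print(broker.deliver({"cmd": "move"}, "proc-B"))   # round-trips through pickle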

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker (respective processes) on a first virtual machine ;

and executing the consumer worker on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker (respective processes) at a first server , wherein the producer worker sends a message to a datacenter queue (different computer) at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computer (datacenter queue) s .

US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue (different computer) , delete the message from the first server .
US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computer (datacenter queue) s .
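
To illustrate the signal and delete-command limitations of claims 2, 3 and 8 as charted here, the sketch below assumes a hypothetical local cache that modifies a stored message when a command-channel signal arrives and removes it when a delete command is received from the datacenter queue; it is a reading aid, not the patent's implementation.

class CachedMessages:
    # Sketch only: a locally cached message is modified when a command-channel
    # signal arrives and deleted when a delete command is received.
    def __init__(self):
        self.messages = {}   # message id -> body

    def store(self, msg_id, body):
        self.messages[msg_id] = body

    def on_signal(self, msg_id, new_body):
        if msg_id in self.messages:
            self.messages[msg_id] = new_body   # modify in response to the signal

    def on_delete_command(self, msg_id):
        self.messages.pop(msg_id, None)        # delete the message from the first server

cache = CachedMessages()
cache.store("m1", "original body")
cache.on_signal("m1", "modified body")
cache.on_delete_command("m1")
print(cache.messages)   # {}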

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request (reception information) sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue (different computer) associated with the message request .
US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each , of the modules ;
and at configuration file which includes computer names which execute processes , module names included , in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computer (datacenter queue) s .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker (respective processes) associated with the message .
US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue (different computer) associated with the message .
US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computer (datacenter queue) s .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table (transmission source) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (respective processes) information , consumer worker information , datacenter queue (different computer) information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20100010671A1
CLAIM 3
. The information processing system according to claim 2 , wherein the message broker has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source (queue user table) and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computer (datacenter queue) s .

US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .
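
The queue user table of claims 12 and 18 can be read as a per-queue record of observed producers and consumers. A minimal sketch under that reading (field and method names are hypothetical, not drawn from the patent):

from collections import defaultdict

class QueueUserTable:
    # Sketch only: one record per datacenter queue, accumulating the producer
    # and consumer workers observed using that queue.
    def __init__(self):
        self.table = defaultdict(lambda: {"producers": set(), "consumers": set()})

    def observe_send(self, queue_name, producer_id):
        self.table[queue_name]["producers"].add(producer_id)

    def observe_request(self, queue_name, consumer_id):
        self.table[queue_name]["consumers"].add(consumer_id)

users = QueueUserTable()
users.observe_send("jobs", "worker-P1")
users.observe_request("jobs", "worker-C7")
print(dict(users.table))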

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application is further configured to : update the queue user table (transmission source) based on the observed queue usage information .
US20100010671A1
CLAIM 3
. The information processing system according to claim 2 , wherein the message broker has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source (queue user table) and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker (respective processes) and consumer worker pairs through use of the queue user table (transmission source) through a process to : identify a message that includes matching the producer worker to another datacenter queue (different computer) , and identify a message request (reception information) that includes matching the consumer worker to the other datacenter queue .
US20100010671A1
CLAIM 3
. The information processing system according to claim 2 , wherein the message broker has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source (queue user table) and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each , of the modules ;
and at configuration file which includes computer names which execute processes , module names included , in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computer (datacenter queue) s .

US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US20100010671A1
CLAIM 18
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules to perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform , parallel processing are arranged in different processes , and in at least some of the processes , an order of synchronous multicast transmission is determined in accordance with process-order r (matching producer, determining matching producer worker) elationships determined on the basis of the message transmission/reception relationships among modules serving as destinations of the synchronous multicast transmission .
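
Claims 14 and 19 then pair producers and consumers observed on the same datacenter queue. A sketch of that matching step, reusing the hypothetical table shape from the previous sketch:

def match_pairs(queue_user_table):
    # Sketch only: pair every producer with every consumer observed on the
    # same datacenter queue.
    pairs = []
    for queue_name, users in queue_user_table.items():
        for producer in users["producers"]:
            for consumer in users["consumers"]:
                pairs.append((producer, consumer, queue_name))
    return pairs

observed = {"jobs": {"producers": {"worker-P1"}, "consumers": {"worker-C7"}}}
print(match_pairs(observed))   # [('worker-P1', 'worker-C7', 'jobs')]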

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer (order r) and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker (respective processes) and the consumer worker .
US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US20100010671A1
CLAIM 18
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules to perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform , parallel processing are arranged in different processes , and in at least some of the processes , an order of synchronous multicast transmission is determined in accordance with process-order r (matching producer, determining matching producer worker) elationships determined on the basis of the message transmission/reception relationships among modules serving as destinations of the synchronous multicast transmission .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker (respective processes) ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request (reception information) .
US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each , of the modules ;
and at configuration file which includes computer names which execute processes , module names included , in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker (respective processes) at a first server , wherein the producer worker sends a message to a datacenter queue (different computer) at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (reception information) to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each , of the modules ;
and at configuration file which includes computer names which execute processes , module names included , in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computer (datacenter queue) s .

US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table (transmission source) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (respective processes) information , consumer worker information , datacenter queue (different computer) information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20100010671A1
CLAIM 3
. The information processing system according to claim 2 , wherein the message broker has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source (queue user table) and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computer (datacenter queue) s .

US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker (respective processes) and consumer worker pairs through use of the queue user table (transmission source) through a process to : identify a message that includes matching the producer worker to another datacenter queue (different computer) , and identify a message request (reception information) that includes matching the consumer worker to the other datacenter queue .
US20100010671A1
CLAIM 3
. The information processing system according to claim 2 , wherein the message broker has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source (queue user table) and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each , of the modules ;
and at configuration file which includes computer names which execute processes , module names included , in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computer (datacenter queue) s .

US20100010671A1
CLAIM 16
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules which should perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform parallel processing are arranged in different processes , message brokers are included in respective processes (producer worker) , each of the message broker being used for message exchanged between modules and which has a function of serialization in which a message is changed from an initial state to another state and deserialization in which the serialized message is deserialized , and when a message-transmission source and a message-reception source are included in an identical process , the serialization and the deserialization are omitted .

US20100010671A1
CLAIM 18
. A computer-readable program which includes a plurality of modules , which perform intermodule communication by means of message passing , and which is executed in a computer in a unit of process , wherein a group of modules to perform synchronous real-time processing is arranged in a single process , and modules which allow asynchronous processing and which should perform , parallel processing are arranged in different processes , and in at least some of the processes , an order of synchronous multicast transmission is determined in accordance with process-order r (matching producer, determining matching producer worker) elationships determined on the basis of the message transmission/reception relationships among modules serving as destinations of the synchronous multicast transmission .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request (reception information) and the datacenter queue (different computer) associated with the message request .
US20100010671A1
CLAIM 8
. The information processing system according to claim 5 , further comprising : means for collecting message transmission/reception information (message request) regarding transmission modules and reception modules of messages in accordance with a function for obtaining a list of transmission messages and a function for obtaining a list of reception messages included in each , of the modules ;
and at configuration file which includes computer names which execute processes , module names included , in the processes , and message processing timings for the modules and which specifies a message having the process-order dependency , wherein the process-order-dependency obtaining means obtains the process-order-dependency relationship using the configuration file and the message transmission/reception information .

US20100010671A1
CLAIM 11
. The information processing system according to claim 1 , wherein the information processing system includes two or more computers , and among the modules which are to be asynchronously executed by the parallel processing means , especially modules to be executed in parallel are distributed in different processes to be executed in different computer (datacenter queue) s .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
EP2449849A1

Filed: 2009-06-29     Issued: 2012-05-09

Resource allocation

(Original Assignee) Nokia Oyj     (Current Assignee) Nokia Oyj

Harsh Jahagirdar
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .
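
The quoted EP2449849A1 limitations combine a prioritised message queue with a record of resources allocated to clients. The following sketch is an illustrative assumption of how those two elements could interact (class and method names are hypothetical), not the reference's implementation.

import heapq

class ResourceAllocator:
    # Sketch only: client requests are queued as messages, processed in
    # priority order, and a record of resources allocated to clients is kept.
    def __init__(self):
        self.queue = []          # (priority, sequence, client, resource)
        self.allocations = {}    # record of resources allocated to clients
        self._seq = 0

    def place(self, client, resource, priority=10):
        heapq.heappush(self.queue, (priority, self._seq, client, resource))
        self._seq += 1

    def process_next(self):
        if self.queue:
            _, _, client, resource = heapq.heappop(self.queue)
            self.allocations.setdefault(client, []).append(resource)

allocator = ResourceAllocator()
allocator.place("client-A", "camera", priority=5)
allocator.place("client-B", "codec", priority=1)
allocator.process_next()
print(allocator.allocations)   # {'client-B': ['codec']}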

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker (one processor) prior to storing the message in the queue cache at the second server .
EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker (one processor) on a first virtual machine ;

and executing the consumer worker on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .

US9479472B2
CLAIM 7
. A computing device (computing device) to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application (more system settings) is configured to : detect a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
EP2449849A1
CLAIM 1
. A method comprising : placing a message corresponding to a request , originating from a client , in a queue of messages ;
processing said message by allocating a resource of a computing device (computing device) to the corresponding client with reference to a system setting ;
maintaining a record of resources allocated to clients ;
and where said queue of messages comprises more than one message , prioritising the order in which said messages are processed .

EP2449849A1
CLAIM 6
. The method according to claim 4 wherein said change which may affect existing allocations of resources originates from a change to said one or more system settings (VMM application) .

EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .

US9479472B2
CLAIM 8
. The computing device (computing device) of claim 7 , wherein the VMM application (more system settings) is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server .
EP2449849A1
CLAIM 1
. A method comprising : placing a message corresponding to a request , originating from a client , in a queue of messages ;
processing said message by allocating a resource of a computing device (computing device) to the corresponding client with reference to a system setting ;
maintaining a record of resources allocated to clients ;
and where said queue of messages comprises more than one message , prioritising the order in which said messages are processed .

EP2449849A1
CLAIM 6
. The method according to claim 4 wherein said change which may affect existing allocations of resources originates from a change to said one or more system settings (VMM application) .

US9479472B2
CLAIM 9
. The computing device (computing device) of claim 7 , wherein the VMM application (more system settings) is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
EP2449849A1
CLAIM 1
. A method comprising : placing a message corresponding to a request , originating from a client , in a queue of messages ;
processing said message by allocating a resource of a computing device (computing device) to the corresponding client with reference to a system setting ;
maintaining a record of resources allocated to clients ;
and where said queue of messages comprises more than one message , prioritising the order in which said messages are processed .

EP2449849A1
CLAIM 6
. The method according to claim 4 wherein said change which may affect existing allocations of resources originates from a change to said one or more system settings (VMM application) .

US9479472B2
CLAIM 10
. The computing device (computing device) of claim 7 , wherein the VMM application (more system settings) is further configured to : observe network traffic through a network connection to identify the producer worker (one processor) associated with the message .
EP2449849A1
CLAIM 1
. A method comprising : placing a message corresponding to a request , originating from a client , in a queue of messages ;
processing said message by allocating a resource of a computing device (computing device) to the corresponding client with reference to a system setting ;
maintaining a record of resources allocated to clients ;
and where said queue of messages comprises more than one message , prioritising the order in which said messages are processed .

EP2449849A1
CLAIM 6
. The method according to claim 4 wherein said change which may affect existing allocations of resources originates from a change to said one or more system settings (VMM application) .

EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .

US9479472B2
CLAIM 11
. The computing device (computing device) of claim 7 , wherein the VMM application (more system settings) is further configured to : observe network traffic through a network connection to detect the datacenter queue associated with the message .
EP2449849A1
CLAIM 1
. A method comprising : placing a message corresponding to a request , originating from a client , in a queue of messages ;
processing said message by allocating a resource of a computing device (computing device) to the corresponding client with reference to a system setting ;
maintaining a record of resources allocated to clients ;
and where said queue of messages comprises more than one message , prioritising the order in which said messages are processed .

EP2449849A1
CLAIM 6
. The method according to claim 4 wherein said change which may affect existing allocations of resources originates from a change to said one or more system settings (VMM application) .

US9479472B2
CLAIM 12
. The computing device (computing device) of claim 7 , wherein the VMM application (more system settings) is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (one processor) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
EP2449849A1
CLAIM 1
. A method comprising : placing a message corresponding to a request , originating from a client , in a queue of messages ;
processing said message by allocating a resource of a computing device (computing device) to the corresponding client with reference to a system setting ;
maintaining a record of resources allocated to clients ;
and where said queue of messages comprises more than one message , prioritising the order in which said messages are processed .

EP2449849A1
CLAIM 6
. The method according to claim 4 wherein said change which may affect existing allocations of resources originates from a change to said one or more system settings (VMM application) .

EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .
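
For orientation only and outside either patent record : a minimal Python sketch of the kind of in-memory "queue user table" recited in claim 12 of US9479472B2 charted above, assuming invented names (QueueUserTable, observe_send, observe_request) and a simple mapping from each datacenter queue identifier to the producer and consumer workers observed using it.

from collections import defaultdict

class QueueUserTable:
    """Hypothetical, illustration-only table of observed queue usage."""
    def __init__(self):
        # queue_id -> sets of worker ids observed producing to / consuming from it
        self._table = defaultdict(lambda: {"producers": set(), "consumers": set()})

    def observe_send(self, queue_id, producer_id):
        # record a producer worker seen sending a message to the queue
        self._table[queue_id]["producers"].add(producer_id)

    def observe_request(self, queue_id, consumer_id):
        # record a consumer worker seen requesting a message from the queue
        self._table[queue_id]["consumers"].add(consumer_id)

    def users_of(self, queue_id):
        return self._table[queue_id]

table = QueueUserTable()
table.observe_send("orders-queue", "producer-vm-1")
table.observe_request("orders-queue", "consumer-vm-2")
print(table.users_of("orders-queue"))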

US9479472B2
CLAIM 13
. The computing device (computing device) of claim 12 , wherein the VMM application (more system settings) is further configured to : update the queue user table based on the observed queue usage information .
EP2449849A1
CLAIM 1
. A method comprising : placing a message corresponding to a request , originating from a client , in a queue of messages ;
processing said message by allocating a resource of a computing device (computing device) to the corresponding client with reference to a system setting ;
maintaining a record of resources allocated to clients ;
and where said queue of messages comprises more than one message , prioritising the order in which said messages are processed .

EP2449849A1
CLAIM 6
. The method according to claim 4 wherein said change which may affect existing allocations of resources originates from a change to said one or more system settings (VMM application) .

US9479472B2
CLAIM 14
. The computing device (computing device) of claim 12 , wherein the VMM application (more system settings) is further configured to : determine matching producer worker (one processor) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
EP2449849A1
CLAIM 1
. A method comprising : placing a message corresponding to a request , originating from a client , in a queue of messages ;
processing said message by allocating a resource of a computing device (computing device) to the corresponding client with reference to a system setting ;
maintaining a record of resources allocated to clients ;
and where said queue of messages comprises more than one message , prioritising the order in which said messages are processed .

EP2449849A1
CLAIM 6
. The method according to claim 4 wherein said change which may affect existing allocations of resources originates from a change to said one or more system settings (VMM application) .

EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .
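
For orientation only : a hypothetical matching step over a queue user table of the shape sketched earlier, illustrating the producer/consumer pairing recited in claim 14 of US9479472B2 charted above. The function name and the rule that a queue is "matched" when at least one observed producer and one observed consumer share it are assumptions for illustration, not a statement of the patented method.

def find_matched_pairs(queue_user_table):
    # a queue is treated as "matched" when at least one producer and one
    # consumer have both been observed using it on the same host
    matched = {}
    for queue_id, users in queue_user_table.items():
        if users["producers"] and users["consumers"]:
            matched[queue_id] = [(p, c) for p in sorted(users["producers"])
                                        for c in sorted(users["consumers"])]
    return matched

observed = {
    "orders-queue":  {"producers": {"producer-vm-1"}, "consumers": {"consumer-vm-2"}},
    "billing-queue": {"producers": {"producer-vm-3"}, "consumers": set()},
}
print(find_matched_pairs(observed))   # only "orders-queue" yields a pair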

US9479472B2
CLAIM 15
. The computing device (computing device) of claim 14 , wherein the VMM application (more system settings) is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker (one processor) and the consumer worker .
EP2449849A1
CLAIM 1
. A method comprising : placing a message corresponding to a request , originating from a client , in a queue of messages ;
processing said message by allocating a resource of a computing device (computing device) to the corresponding client with reference to a system setting ;
maintaining a record of resources allocated to clients ;
and where said queue of messages comprises more than one message , prioritising the order in which said messages are processed .

EP2449849A1
CLAIM 6
. The method according to claim 4 wherein said change which may affect existing allocations of resources originates from a change to said one or more system settings (VMM application) .

EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .

US9479472B2
CLAIM 16
. The computing device (computing device) of claim 15 , wherein the intercept module of the VMM application (more system settings) is configured to : intercept the message sent by the producer worker (one processor) ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
EP2449849A1
CLAIM 1
. A method comprising : placing a message corresponding to a request , originating from a client , in a queue of messages ;
processing said message by allocating a resource of a computing device (computing device) to the corresponding client with reference to a system setting ;
maintaining a record of resources allocated to clients ;
and where said queue of messages comprises more than one message , prioritising the order in which said messages are processed .

EP2449849A1
CLAIM 6
. The method according to claim 4 wherein said change which may affect existing allocations of resources originates from a change to said one or more system settings (VMM application) .

EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .
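
For orientation only : a minimal single-host Python sketch of the flow recited in claim 17 of US9479472B2 charted above, assuming an invented LocalQueueCache class. It intercepts a producer's send, keeps a copy in a local queue cache, serves a co-located consumer's request from that cache, and applies a command-channel signal to the cached copy; it is a sketch under those assumptions, not the claimed implementation.

from collections import deque

class LocalQueueCache:
    """Hypothetical per-host cache standing in for a remote datacenter queue."""
    def __init__(self):
        self._cache = {}                     # queue_id -> deque of message dicts

    def intercept_send(self, queue_id, message):
        # local copy kept; the real message would still go to the remote queue
        self._cache.setdefault(queue_id, deque()).append(dict(message))

    def serve_request(self, queue_id):
        # answer a co-located consumer's request from the local copy if possible
        q = self._cache.get(queue_id)
        return q.popleft() if q else None

    def on_command_signal(self, queue_id, changes):
        # a command-channel signal from the remote queue modifies cached copies
        for msg in self._cache.get(queue_id, ()):
            msg.update(changes)

cache = LocalQueueCache()
cache.intercept_send("orders-queue", {"id": 1, "body": "new order"})
cache.on_command_signal("orders-queue", {"priority": "high"})
print(cache.serve_request("orders-queue"))   # locally served, modified message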

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (one processor) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker (one processor) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
EP2449849A1
CLAIM 15
. Apparatus comprising : at least one processor (producer worker) ;
and at least one memory including computer program code ;
the at least one memory and the computer program code being configured to , working with the at least one processor , cause the apparatus to perform at least the following : cause a message , corresponding to a request originating from a client , to be placed in a queue of messages ;
cause said message to be processed by allocating a computing device resource to the corresponding client with reference to a system setting ;
cause a record of resources allocated to clients to be maintained ;
and where said queue of messages comprises more than one message , causing the order in which said messages are processed to be prioritised .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20100161753A1

Filed: 2008-12-19     Issued: 2010-06-24

Method and communication device for processing data for transmission from the communication device to a second communication device

(Original Assignee) Research in Motion Ltd     (Current Assignee) BlackBerry Ltd

Gerhard Dietrich Klassen, Robert Edwards
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (instant messaging) to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request (instant messaging) and the datacenter queue associated with the message request .
US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions (said memory) ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20100161753A1
CLAIM 14
. The communication device of claim 11 , further comprising a memory in communication with said processing unit , said memory (store instructions) enabled to store said address in at least one of a database and a table in association with an identifier of said attachment , and said processing unit is further enabled to determine said address of said copy by processing at least one of said database and said table to retrieve said address via said identifier .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request (instant messaging) sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (instant messaging) that includes matching the consumer worker to the other datacenter queue .
US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request (instant messaging) .
US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (instant messaging) to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (instant messaging) that includes matching the consumer worker to the other datacenter queue .
US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request (instant messaging) and the datacenter queue associated with the message request .
US20100161753A1
CLAIM 10
. The method of claim 1 , wherein said data comprises at least one of an e-mail , a text-message , a short message service message and an instant messaging (message request) message , and said attachment comprises at least one of image data , audio data , video data and document data .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20100107176A1

Filed: 2008-10-24     Issued: 2010-04-29

Maintenance of message serialization in multi-queue messaging environments

(Original Assignee) SAP SE     (Current Assignee) SAP SE

Joerg Kessler
US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one (selection criteria) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20100107176A1
CLAIM 3
. The system of claim 2 , wherein the queue assignment handler is configured to select the plurality of source queues from among a larger pool of source queues based on an efficiency selection criteria (identifying one) associated with optimizing an increase in efficiency of processing of the messages resulting from the consolidation .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information (transmission time) associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20100107176A1
CLAIM 17
. A method comprising : providing messages to a source queue in serialized order , each message associated with a serialization context ;
buffering the messages in the source queue until a transmission time (datacenter queue information) is reached , in turn , for each buffered message ;
sending transmission-ready messages from the source queue according to the serialized order , using the serialization context , while continuing to store existing messages that are not yet transmission-ready ;
changing a queue assignment of the serialization context to a target queue ;
providing subsequent messages with the serialization context to the target queue for buffering therein , while continuing to send remaining transmission-ready messages from the source queue ;
determining that all of the existing messages from the source queue associated with the serialization context have been sent ;
and beginning to send the subsequent messages from the target queue in serialized order , using the serialization context .
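
For orientation only : a minimal Python sketch of the queue-reassignment protocol quoted above from claim 17 of US20100107176A1, assuming an invented SerializedSender class. New messages for a serialization context buffer in the target queue after reassignment, but sending from the target queue does not begin until the source queue has drained, which preserves the serialized order.

from collections import deque

class SerializedSender:
    """Hypothetical sender migrating a serialization context between queues."""
    def __init__(self):
        self.source, self.target = deque(), deque()
        self.reassigned = False              # has the context moved to the target queue?

    def enqueue(self, message):
        (self.target if self.reassigned else self.source).append(message)

    def reassign_context(self):
        self.reassigned = True               # subsequent messages buffer in the target

    def send_next(self):
        # the source queue drains completely before the target starts sending
        if self.source:
            return self.source.popleft()
        return self.target.popleft() if self.target else None

s = SerializedSender()
s.enqueue("m1"); s.enqueue("m2")
s.reassign_context()
s.enqueue("m3")
print([s.send_next() for _ in range(3)])     # ['m1', 'm2', 'm3']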

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (queue management) between the producer worker and the consumer worker .
US20100107176A1
CLAIM 6
. The system of claim 1 comprising : a view generator configured to generate a queue management (second message) user interface providing fields to receive queue designations of the source queue and the target queue from a user ;
and a request handler configured to receive the queue designations and forward the queue designations to the queue assignment handler .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information (transmission time) associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20100107176A1
CLAIM 17
. A method comprising : providing messages to a source queue in serialized order , each message associated with a serialization context ;
buffering the messages in the source queue until a transmission time (datacenter queue information) is reached , in turn , for each buffered message ;
sending transmission-ready messages from the source queue according to the serialized order , using the serialization context , while continuing to store existing messages that are not yet transmission-ready ;
changing a queue assignment of the serialization context to a target queue ;
providing subsequent messages with the serialization context to the target queue for buffering therein , while continuing to send remaining transmission-ready messages from the source queue ;
determining that all of the existing messages from the source queue associated with the serialization context have been sent ;
and beginning to send the subsequent messages from the target queue in serialized order , using the serialization context .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one (selection criteria) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20100107176A1
CLAIM 3
. The system of claim 2 , wherein the queue assignment handler is configured to select the plurality of source queues from among a larger pool of source queues based on an efficiency selection criteria (identifying one) associated with optimizing an increase in efficiency of processing of the messages resulting from the consolidation .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
JP2010020650A

Filed: 2008-07-14     Issued: 2010-01-28

Information processing system and information processing method, robot control system and control method, and computer program

(Original Assignee) Sony Corp; ソニー株式会社     

Atsushi Miyamoto, 敦史 宮本
US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker on a first virtual machine (further comprising) ;

and executing the consumer worker on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
JP2010020650A
CLAIM 5
The information processing system according to claim 1 , further comprising (first virtual machine) : processing-order-dependency acquisition means for acquiring a processing-order dependency between modules within a process ; and synchronous multicast communication means for , when synchronous multicast communication is performed from one module in the process to a plurality of other modules , transmitting a message to each destination module in an order based on said processing-order dependency .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module (receiving module) of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
JP2010020650A
CLAIM 8
The information processing system according to claim 5 , wherein each module includes a function for obtaining a list of transmitted messages and a function for obtaining a list of received messages , the system further comprising : means for collecting message transmission and reception information on the transmitting module and the receiving module (intercept module) of each message , based on each module's functions for obtaining the lists of transmitted and received messages ; and a configuration file that describes the name of the computer executing each process , the modules arranged in each process and the message processing timing of each module , and that designates messages having a processing-order dependency ; and wherein said processing-order-dependency acquisition means acquires said processing-order dependency relationships using said configuration file and said message transmission and reception information .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module (receiving module) of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
JP2010020650A
CLAIM 8
The information processing system according to claim 5 , wherein each module includes a function for obtaining a list of transmitted messages and a function for obtaining a list of received messages , the system further comprising : means for collecting message transmission and reception information on the transmitting module and the receiving module (intercept module) of each message , based on each module's functions for obtaining the lists of transmitted and received messages ; and a configuration file that describes the name of the computer executing each process , the modules arranged in each process and the message processing timing of each module , and that designates messages having a processing-order dependency ; and wherein said processing-order-dependency acquisition means acquires said processing-order dependency relationships using said configuration file and said message transmission and reception information .
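
For orientation only : a minimal Python sketch of the collection step described in claim 8 of JP2010020650A translated above, in which each module exposes the lists of messages it sends and receives and the system derives, per message, the sending module and the receiving modules. The function name, data shapes, and module names are invented for illustration.

def collect_message_routing(modules):
    # modules: {module_name: {"sends": [message names], "receives": [message names]}}
    routing = {}
    for name, info in modules.items():
        for msg in info["sends"]:
            routing.setdefault(msg, {"sender": None, "receivers": []})["sender"] = name
        for msg in info["receives"]:
            routing.setdefault(msg, {"sender": None, "receivers": []})["receivers"].append(name)
    return routing

modules = {
    "vision":  {"sends": ["obstacle_detected"], "receives": []},
    "planner": {"sends": ["motion_plan"], "receives": ["obstacle_detected"]},
    "motor":   {"sends": [], "receives": ["motion_plan", "obstacle_detected"]},
}
print(collect_message_routing(modules))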




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20080270536A1

Filed: 2008-07-09     Issued: 2008-10-30

Document shadowing intranet server, memory medium and method

(Original Assignee) James Louis Keesey; Gerald Johann Wilmot     

James Louis Keesey, Gerald Johann Wilmot
US9479472B2
CLAIM 7
. A computing device (elapsed time) to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions (said memory) ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20080270536A1
CLAIM 1
. A memory medium for controlling an intranet server that handles the requests of one or more downstream intranet servers or users for resources that are served by one or more web servers via an internet to said intranet server via an upstream intranet server , said memory (store instructions) medium comprising : (a) means for controlling said intranet server to maintain and update a usage count for each request from said downstream intranet servers or users for a first resource ;
(b) means for controlling said intranet server to maintain said first resource in a local memory of said intranet server if said usage count is equal to or greater than a threshold count value ;
(c) means for controlling said intranet server to send an inquiry to said upstream intranet server or to one of said web servers that is capable of serving said first resource , said inquiry identifying said first resource , whether said first resource is stored in said local memory , and the local version status of said first resource if so stored ;
(d) means for controlling said intranet server to receive a response to said inquiry , said response including a current version status of said first resource , a current version of said first resource if said current version is more recent than said local version or if said first resource is not stored in said local memory ;
(e) means for controlling said intranet server to store said current version of the first resource , when received , in said local memory if said usage count is equal to or greater than said threshold count value , and (f) means for controlling said intranet server to serve said first resource to one of said downstream intranet servers or users that is currently requesting said first resource .

US20080270536A1
CLAIM 4
. The memory medium of claim 3 , and further comprising : (a) means for controlling said intranet server to maintain a second resource in said local memory without regard for frequency of usage or elapsed time (computing device) between requests for said second resource ;
and (b) means wherein said second resource is served , whenever received as new or revised from an upstream server or an intranet operator , to said downstream intranet server .
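
For orientation only : a minimal Python sketch of the usage-count scheme quoted above from claim 1 of US20080270536A1, assuming an invented ShadowingCache class and fetch_upstream callback. A resource is kept in local memory once its request count reaches a threshold, and the upstream server is consulted for a newer version when the locally held copy may be stale; this is a sketch under those assumptions, not the patented implementation.

class ShadowingCache:
    """Hypothetical intranet cache that shadows resources once they are popular enough."""
    def __init__(self, threshold, fetch_upstream):
        self.threshold = threshold
        self.fetch_upstream = fetch_upstream     # callable(url, local_version) -> (version, body or None)
        self.counts, self.store = {}, {}         # url -> request count, url -> (version, body)

    def serve(self, url):
        self.counts[url] = self.counts.get(url, 0) + 1
        local_version = self.store.get(url, (None, None))[0]
        version, body = self.fetch_upstream(url, local_version)
        if body is None:                         # upstream reports the local copy is current
            body = self.store[url][1]
        elif self.counts[url] >= self.threshold: # popular enough: keep a local shadow copy
            self.store[url] = (version, body)
        return body

def upstream(url, local_version):
    current = ("v2", "<html>doc</html>")
    return (current[0], None) if local_version == current[0] else current

cache = ShadowingCache(threshold=2, fetch_upstream=upstream)
cache.serve("/doc")                              # first request: passed through, not shadowed
cache.serve("/doc")                              # second request: count meets threshold, stored locally
print(cache.serve("/doc"))                       # third request: upstream says current, local copy served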

US9479472B2
CLAIM 8
. The computing device (elapsed time) of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server .
US20080270536A1
CLAIM 4
. The memory medium of claim 3 , and further comprising : (a) means for controlling said intranet server to maintain a second resource in said local memory without regard for frequency of usage or elapsed time (computing device) between requests for said second resource ;
and (b) means wherein said second resource is served , whenever received as new or revised from an upstream server or an intranet operator , to said downstream intranet server .

US9479472B2
CLAIM 9
. The computing device (elapsed time) of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20080270536A1
CLAIM 4
. The memory medium of claim 3 , and further comprising : (a) means for controlling said intranet server to maintain a second resource in said local memory without regard for frequency of usage or elapsed time (computing device) between requests for said second resource ;
and (b) means wherein said second resource is served , whenever received as new or revised from an upstream server or an intranet operator , to said downstream intranet server .

US9479472B2
CLAIM 10
. The computing device (elapsed time) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker associated with the message .
US20080270536A1
CLAIM 4
. The memory medium of claim 3 , and further comprising : (a) means for controlling said intranet server to maintain a second resource in said local memory without regard for frequency of usage or elapsed time (computing device) between requests for said second resource ;
and (b) means wherein said second resource is served , whenever received as new or revised from an upstream server or an intranet operator , to said downstream intranet server .

US9479472B2
CLAIM 11
. The computing device (elapsed time) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue associated with the message .
US20080270536A1
CLAIM 4
. The memory medium of claim 3 , and further comprising : (a) means for controlling said intranet server to maintain a second resource in said local memory without regard for frequency of usage or elapsed time (computing device) between requests for said second resource ;
and (b) means wherein said second resource is served , whenever received as new or revised from an upstream server or an intranet operator , to said downstream intranet server .

US9479472B2
CLAIM 12
. The computing device (elapsed time) of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20080270536A1
CLAIM 4
. The memory medium of claim 3 , and further comprising : (a) means for controlling said intranet server to maintain a second resource in said local memory without regard for frequency of usage or elapsed time (computing device) between requests for said second resource ;
and (b) means wherein said second resource is served , whenever received as new or revised from an upstream server or an intranet operator , to said downstream intranet server .

US9479472B2
CLAIM 13
. The computing device (elapsed time) of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information .
US20080270536A1
CLAIM 4
. The memory medium of claim 3 , and further comprising : (a) means for controlling said intranet server to maintain a second resource in said local memory without regard for frequency of usage or elapsed time (computing device) between requests for said second resource ;
and (b) means wherein said second resource is served , whenever received as new or revised from an upstream server or an intranet operator , to said downstream intranet server .

US9479472B2
CLAIM 14
. The computing device (elapsed time) of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20080270536A1
CLAIM 4
. The memory medium of claim 3 , and further comprising : (a) means for controlling said intranet server to maintain a second resource in said local memory without regard for frequency of usage or elapsed time (computing device) between requests for said second resource ;
and (b) means wherein said second resource is served , whenever received as new or revised from an upstream server or an intranet operator , to said downstream intranet server .

US9479472B2
CLAIM 15
. The computing device (elapsed time) of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
US20080270536A1
CLAIM 4
. The memory medium of claim 3 , and further comprising : (a) means for controlling said intranet server to maintain a second resource in said local memory without regard for frequency of usage or elapsed time (computing device) between requests for said second resource ;
and (b) means wherein said second resource is served , whenever received as new or revised from an upstream server or an intranet operator , to said downstream intranet server .

US9479472B2
CLAIM 16
. The computing device (elapsed time) of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
US20080270536A1
CLAIM 4
. The memory medium of claim 3 , and further comprising : (a) means for controlling said intranet server to maintain a second resource in said local memory without regard for frequency of usage or elapsed time (computing device) between requests for said second resource ;
and (b) means wherein said second resource is served , whenever received as new or revised from an upstream server or an intranet operator , to said downstream intranet server .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20090254920A1

Filed: 2008-04-04     Issued: 2009-10-08

Extended dynamic optimization of connection establishment and message progress processing in a multi-fabric message passing interface implementation

(Original Assignee) Intel Corp     (Current Assignee) Intel Corp

Vladimir D. Truschin, Alexander V. Supalov, Alexey V. Ryzhykh
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (multi-core processor) at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20090254920A1
CLAIM 14
. A system comprising : a first node including at least one multi-core processor (datacenter queue) having a plurality of cores , wherein each core can execute a process ;
and a memory coupled to the at least one multi-core processor , wherein the memory includes instructions that enable the system to automatically determine a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests , and to prevent processing of new connection requests and data transfer requests outside of a predetermined communication pattern .

US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel associated with the datacenter queue (multi-core processor) ;

and modifying the message in response to receiving the signal .
US20090254920A1
CLAIM 14
. A system comprising : a first node including at least one multi-core processor (datacenter queue) having a plurality of cores , wherein each core can execute a process ;
and a memory coupled to the at least one multi-core processor , wherein the memory includes instructions that enable the system to automatically determine a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests , and to prevent processing of new connection requests and data transfer requests outside of a predetermined communication pattern .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command from the datacenter queue (multi-core processor) , deleting the message from the datacenter queue .
US20090254920A1
CLAIM 14
. A system comprising : a first node including at least one multi-core processor (datacenter queue) having a plurality of cores , wherein each core can execute a process ;
and a memory coupled to the at least one multi-core processor , wherein the memory includes instructions that enable the system to automatically determine a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests , and to prevent processing of new connection requests and data transfer requests outside of a predetermined communication pattern .
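
For orientation only : a minimal Python sketch of the command-channel handling recited in claims 2, 3 and 8 of US9479472B2 charted above, assuming an invented handle_command_signal helper and a simple per-queue store of locally cached messages. A "modify" signal updates the locally stored message and a "delete" command removes the local copy.

def handle_command_signal(local_store, queue_id, signal):
    # local_store: {queue_id: {message_id: message dict}} held on the first server
    messages = local_store.get(queue_id, {})
    if signal["command"] == "delete":
        messages.pop(signal["message_id"], None)       # delete command removes the local copy
    elif signal["command"] == "modify":
        if signal["message_id"] in messages:
            messages[signal["message_id"]].update(signal["changes"])

store = {"orders-queue": {42: {"body": "new order", "visible": True}}}
handle_command_signal(store, "orders-queue",
                      {"command": "modify", "message_id": 42, "changes": {"visible": False}})
handle_command_signal(store, "orders-queue",
                      {"command": "delete", "message_id": 42})
print(store)                                           # {'orders-queue': {}}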

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request and the datacenter queue (multi-core processor) associated with the message request .
US20090254920A1
CLAIM 14
. A system comprising : a first node including at least one multi-core processor (datacenter queue) having a plurality of cores , wherein each core can execute a process ;
and a memory coupled to the at least one multi-core processor , wherein the memory includes instructions that enable the system to automatically determine a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests , and to prevent processing of new connection requests and data transfer requests outside of a predetermined communication pattern .

US9479472B2
CLAIM 7
. A computing device to provide local processing (first spin) of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (multi-core processor) at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20090254920A1
CLAIM 3
. The article of claim 2 , further comprising instructions that when executed enable the system to enable the progress engine to establish a new connection and update the first variable , and to update a third variable associated with a first spin (local processing) count and a fourth variable associated with a second spin count .

US20090254920A1
CLAIM 14
. A system comprising : a first node including at least one multi-core processor (datacenter queue) having a plurality of cores , wherein each core can execute a process ;
and a memory coupled to the at least one multi-core processor , wherein the memory includes instructions that enable the system to automatically determine a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests , and to prevent processing of new connection requests and data transfer requests outside of a predetermined communication pattern .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue (multi-core processor) , delete the message from the first server .
US20090254920A1
CLAIM 14
. A system comprising : a first node including at least one multi-core processor (datacenter queue) having a plurality of cores , wherein each core can execute a process ;
and a memory coupled to the at least one multi-core processor , wherein the memory includes instructions that enable the system to automatically determine a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests , and to prevent processing of new connection requests and data transfer requests outside of a predetermined communication pattern .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue (multi-core processor) associated with the message request .
US20090254920A1
CLAIM 14
. A system comprising : a first node including at least one multi-core processor (datacenter queue) having a plurality of cores , wherein each core can execute a process ;
and a memory coupled to the at least one multi-core processor , wherein the memory includes instructions that enable the system to automatically determine a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests , and to prevent processing of new connection requests and data transfer requests outside of a predetermined communication pattern .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue (multi-core processor) associated with the message .
US20090254920A1
CLAIM 14
. A system comprising : a first node including at least one multi-core processor (datacenter queue) having a plurality of cores , wherein each core can execute a process ;
and a memory coupled to the at least one multi-core processor , wherein the memory includes instructions that enable the system to automatically determine a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests , and to prevent processing of new connection requests and data transfer requests outside of a predetermined communication pattern .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (multi-core processor) information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20090254920A1
CLAIM 14
. A system comprising : a first node including at least one multi-core processor (datacenter queue) having a plurality of cores , wherein each core can execute a process ;
and a memory coupled to the at least one multi-core processor , wherein the memory includes instructions that enable the system to automatically determine a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests , and to prevent processing of new connection requests and data transfer requests outside of a predetermined communication pattern .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (multi-core processor) , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20090254920A1
CLAIM 14
. A system comprising : a first node including at least one multi-core processor (datacenter queue) having a plurality of cores , wherein each core can execute a process ;
and a memory coupled to the at least one multi-core processor , wherein the memory includes instructions that enable the system to automatically determine a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests , and to prevent processing of new connection requests and data transfer requests outside of a predetermined communication pattern .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (second message) between the producer worker and the consumer worker .
US20090254920A1
CLAIM 12
. The method of claim 11 , further comprising : thereafter decrementing the write in progress count if the active queue is empty ;
and otherwise sending a second message (second message) from the first process through the virtual channel .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (multi-core processor) at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20090254920A1
CLAIM 14
. A system comprising : a first node including at least one multi-core processor (datacenter queue) having a plurality of cores , wherein each core can execute a process ;
and a memory coupled to the at least one multi-core processor , wherein the memory includes instructions that enable the system to automatically determine a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests , and to prevent processing of new connection requests and data transfer requests outside of a predetermined communication pattern .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (multi-core processor) information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20090254920A1
CLAIM 14
. A system comprising : a first node including at least one multi-core processor (datacenter queue) having a plurality of cores , wherein each core can execute a process ;
and a memory coupled to the at least one multi-core processor , wherein the memory includes instructions that enable the system to automatically determine a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests , and to prevent processing of new connection requests and data transfer requests outside of a predetermined communication pattern .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (multi-core processor) , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20090254920A1
CLAIM 14
. A system comprising : a first node including at least one multi-core processor (datacenter queue) having a plurality of cores , wherein each core can execute a process ;
and a memory coupled to the at least one multi-core processor , wherein the memory includes instructions that enable the system to automatically determine a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests , and to prevent processing of new connection requests and data transfer requests outside of a predetermined communication pattern .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request and the datacenter queue (multi-core processor) associated with the message request .
US20090254920A1
CLAIM 14
. A system comprising : a first node including at least one multi-core processor (datacenter queue) having a plurality of cores , wherein each core can execute a process ;
and a memory coupled to the at least one multi-core processor , wherein the memory includes instructions that enable the system to automatically determine a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests , and to prevent processing of new connection requests and data transfer requests outside of a predetermined communication pattern .
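
For orientation only, the following minimal Python sketch illustrates the "queue user table" construction, update, and producer/consumer pair matching recited in claims 18 and 19 of US9479472B2; the class, method, and worker/queue names are hypothetical and are not taken from either charted patent's disclosure.

    # Illustrative sketch only; identifiers are hypothetical.
    from collections import defaultdict

    class QueueUserTable:
        """Records observed queue usage (producer info, consumer info, queue info)."""
        def __init__(self):
            # queue name -> sets of observed producer and consumer workers
            self.usage = defaultdict(lambda: {"producers": set(), "consumers": set()})

        def observe_send(self, producer_id, queue_name):
            # update the table from an observed message sent to a datacenter queue
            self.usage[queue_name]["producers"].add(producer_id)

        def observe_request(self, consumer_id, queue_name):
            # update the table from an observed message request to a datacenter queue
            self.usage[queue_name]["consumers"].add(consumer_id)

        def matched_pairs(self):
            # a producer/consumer pair "matches" when both are observed using the same queue
            pairs = []
            for queue_name, info in self.usage.items():
                for p in info["producers"]:
                    for c in info["consumers"]:
                        pairs.append((p, c, queue_name))
            return pairs

    table = QueueUserTable()
    table.observe_send("worker-A", "orders-queue")
    table.observe_request("worker-B", "orders-queue")
    print(table.matched_pairs())  # [('worker-A', 'worker-B', 'orders-queue')]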




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20090022285A1

Filed: 2008-03-21     Issued: 2009-01-22

Dynamic Voicemail Receptionist System

(Original Assignee) AT&T Mobility II LLC     (Current Assignee) AT&T Mobility II LLC

Scott Swanburg, Andre Okada, Paul Hanson, Chris Young
US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions (store instructions) ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20090022285A1
CLAIM 1
. A voicemail receptionist system , comprising : a memory configured to store user data associated with at least one user and to store instructions (store instructions) for handling a communication ;
and a processor operably connected to the memory , the processor being configured to determine how to handle a communication based upon the user data and the instructions stored in the memory .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one (configured to store) or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20090022285A1
CLAIM 1
. A voicemail receptionist system , comprising : a memory configured to store (identify one) user data associated with at least one user and to store instructions for handling a communication ;
and a processor operably connected to the memory , the processor being configured to determine how to handle a communication based upon the user data and the instructions stored in the memory .
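
As a reading aid for the "identify one or more of" limitation of claims 9 and 20, a brief Python sketch follows; the request format and field names are assumptions made purely for illustration.

    # Hypothetical queue-request format; not drawn from either patent.
    def identify_request(message_request):
        """Return the consumer worker and the datacenter queue associated with a request."""
        return message_request.get("worker_id"), message_request.get("queue")

    req = {"op": "receive", "worker_id": "consumer-7", "queue": "billing-queue"}
    print(identify_request(req))  # ('consumer-7', 'billing-queue')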




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20090241118A1

Filed: 2008-03-20     Issued: 2009-09-24

System and method for processing interface requests in batch

(Original Assignee) American Express Travel Related Services Co Inc     (Current Assignee) Liberty Peak Ventures LLC

Krishna K. Lingamneni
US9479472B2
CLAIM 1
. A method to locally process queue requests (requesting application) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20090241118A1
CLAIM 1
. A method for facilitating a reply to a real-time request , the method including : managing the number of currently executing batch jobs ;
submitting the request as a batch job , wherein the batch job executes business logic corresponding at least in part to the request , wherein the request was received from a requesting application (queue requests) and stored into a request queue ;
receiving an output of the batch job ;
formatting a reply message corresponding to the request ;
and , storing the reply message in an accessible reply queue .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests (requesting application) from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20090241118A1
CLAIM 1
. A method for facilitating a reply to a real-time request , the method including : managing the number of currently executing batch jobs ;
submitting the request as a batch job , wherein the batch job executes business logic corresponding at least in part to the request , wherein the request was received from a requesting application (queue requests) and stored into a request queue ;
receiving an output of the batch job ;
formatting a reply message corresponding to the request ;
and , storing the reply message in an accessible reply queue .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module (general purpose computer) of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
US20090241118A1
CLAIM 19
. A computer-readable storage medium containing a set of instructions for a general purpose computer (intercept module) configured to : manage the currently executing batch jobs ;
submit the request as a batch job , wherein the batch job executes business logic corresponding at least in part to the request , wherein the request was received from a requesting application and stored into a request queue ;
receive an output of the batch job ;
format a reply message corresponding to the request ;
and , store the output in an accessible reply queue .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module (general purpose computer) of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
US20090241118A1
CLAIM 19
. A computer-readable storage medium containing a set of instructions for a general purpose computer (intercept module) configured to : manage the currently executing batch jobs ;
submit the request as a batch job , wherein the batch job executes business logic corresponding at least in part to the request , wherein the request was received from a requesting application and stored into a request queue ;
receive an output of the batch job ;
format a reply message corresponding to the request ;
and , store the output in an accessible reply queue .
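
To make the intercept path of claims 15 and 16 easier to follow, a minimal Python sketch is given below, assuming a hypothetical intercept module with a local queue cache; none of the identifiers come from the charted patents.

    # Illustrative sketch of intercept -> local cache -> local delivery.
    from collections import defaultdict, deque

    class InterceptModule:
        def __init__(self, matched_queues):
            self.matched_queues = set(matched_queues)   # from matched queue information
            self.queue_cache = defaultdict(deque)       # queue cache at the first server

        def on_send(self, queue_name, message, forward_to_datacenter):
            if queue_name in self.matched_queues:
                self.queue_cache[queue_name].append(message)   # intercept and store locally
            forward_to_datacenter(queue_name, message)         # remote queue may still receive it

        def on_request(self, queue_name):
            if queue_name in self.matched_queues and self.queue_cache[queue_name]:
                return self.queue_cache[queue_name].popleft()  # provide the intercepted message
            return None                                        # otherwise fall through to the remote queue

    im = InterceptModule(matched_queues={"orders-queue"})
    im.on_send("orders-queue", b"msg-1", forward_to_datacenter=lambda q, m: None)
    print(im.on_request("orders-queue"))  # b'msg-1'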

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests (requesting application) from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20090241118A1
CLAIM 1
. A method for facilitating a reply to a real-time request , the method including : managing the number of currently executing batch jobs ;
submitting the request as a batch job , wherein the batch job executes business logic corresponding at least in part to the request , wherein the request was received from a requesting application (queue requests) and stored into a request queue ;
receiving an output of the batch job ;
formatting a reply message corresponding to the request ;
and , storing the reply message in an accessible reply queue .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20090234908A1

Filed: 2008-03-14     Issued: 2009-09-17

Data transmission queuing using fault prediction

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC

Marc D. Reyhner, Ian C. Jirka
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (remote computer) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20090234908A1
CLAIM 5
. The method of claim 1 , wherein the data is received from a remote computer (second server) server and wherein the data is forwarded over the communication channel to a destination system .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one (second data) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20090234908A1
CLAIM 16
. The method of claim 15 , further comprising : identifying second data (identifying one) applicable to a second rule , allocating the second data to a second of the plurality of virtual queues , and communicating the second data over the communication channel .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (remote computer) .
US20090234908A1
CLAIM 5
. The method of claim 1 , wherein the data is received from a remote computer (second server) server and wherein the data is forwarded over the communication channel to a destination system .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (remote computer) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20090234908A1
CLAIM 5
. The method of claim 1 , wherein the data is received from a remote computer (second server) server and wherein the data is forwarded over the communication channel to a destination system .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information (common component) , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20090234908A1
CLAIM 13
. The method of claim 12 , wherein the first data value and the data are tagged for processing by a common component (consumer worker information) of a data receiving computer system .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (queue management) between the producer worker and the consumer worker .
US20090234908A1
CLAIM 19
. A computer system comprising : a data transmission queue including a plurality of virtual queues including a first virtual queue associated with a first fault group and a second virtual queue associated with a second fault group ;
a communication channel communicatively coupled to the data transmission queue ;
and a virtual queue management (second message) module to evaluate data to be communicated over the communication channel and to control assignment of the evaluated data with respect to at least one of the plurality of virtual queues within the data transmission queue .
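
A short, non-authoritative Python sketch of the reference's virtual-queue allocation (US20090234908A1, claims 16 and 19) follows; the rule predicates and fault-group labels are invented for illustration.

    # Hypothetical rules assigning data to virtual queues grouped by fault group.
    from collections import defaultdict

    virtual_queues = defaultdict(list)  # fault-group label -> queued data

    def allocate(data, rules):
        """Assign data to the first virtual queue whose rule applies."""
        for label, predicate in rules:
            if predicate(data):
                virtual_queues[label].append(data)
                return label
        virtual_queues["default"].append(data)
        return "default"

    rules = [("fault-group-1", lambda d: d["dest"] == "east"),
             ("fault-group-2", lambda d: d["dest"] == "west")]
    print(allocate({"dest": "west", "payload": b"x"}, rules))  # fault-group-2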

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (remote computer) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20090234908A1
CLAIM 5
. The method of claim 1 , wherein the data is received from a remote computer (second server) server and wherein the data is forwarded over the communication channel to a destination system .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information (common component) , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20090234908A1
CLAIM 13
. The method of claim 12 , wherein the first data value and the data are tagged for processing by a common component (consumer worker information) of a data receiving computer system .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one (second data) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20090234908A1
CLAIM 16
. The method of claim 15 , further comprising : identifying second data (identifying one) applicable to a second rule , allocating the second data to a second of the plurality of virtual queues , and communicating the second data over the communication channel .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
EP1939743A2

Filed: 2007-11-26     Issued: 2008-07-02

Event correlation

(Original Assignee) SAP SE     (Current Assignee) SAP SE

Franz Weber
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (incoming messages) to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (second message, message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one (second data) or more of : the consumer worker associated with the message request (incoming messages) and the datacenter queue associated with the message request .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (second message, message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .

EP1939743A2
CLAIM 9
A computer program product operable to cause data processing apparatus to perform operations comprising : buffering first data representing an event in a queue of second data (identifying one) representing a plurality of events , associating the first data with processing statistics , the processing statistics characterizing : whether a process instance is to process the first data ;
a number of process instances handling the first data ;
and a number of process instances that have processed the first data ;
and generating a process instance to process the first data if the first data is indicated as data for which a process instance is to be generated ;
and dequeueing the first data based on the processing statistics , the first data being dequeued if the processing statistics indicate that no process instances are handling the first data and the processing statistics indicate that no process instance is to process the first data , and , if the first data is to be processed by a threshold number of process instances , dequeueing the first data only if the processing statistics indicate that the threshold number of process instances have processed the first data .
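
The statistics-gated dequeueing recited in EP1939743A2 claims 1 and 9 can be condensed into the following Python sketch; the field names and the single-function form are assumptions, not the reference's wording.

    # Dequeue only when no instance is handling or still owes processing,
    # and (if a threshold applies) enough instances have processed the message.
    def may_dequeue(stats, threshold=None):
        if stats["handling"] > 0 or stats["to_process"]:
            return False
        if threshold is not None:
            return stats["processed"] >= threshold
        return True

    print(may_dequeue({"handling": 0, "to_process": False, "processed": 3}, threshold=3))  # True
    print(may_dequeue({"handling": 1, "to_process": False, "processed": 3}, threshold=3))  # False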

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request (incoming messages) sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (second message, message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (incoming messages) that includes matching the consumer worker to the other datacenter queue .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (second message, message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (incoming messages) between the producer worker and the consumer worker .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (second message, message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request (incoming messages) .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (second message, message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (incoming messages) to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (second message, message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (incoming messages) that includes matching the consumer worker to the other datacenter queue .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (second message, message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one (second data) or more of : the consumer worker associated with the message request (incoming messages) and the datacenter queue associated with the message request .
EP1939743A2
CLAIM 1
A computer-implemented method comprising : buffering a message in a queue of incoming messages (second message, message request) , associating the message with processing statistics , the processing statistics characterizing : whether a process instance is to process content of the message ;
a number of process instances handling the message ;
and a number of process instances that have processed the content of the message ;
and generating a process instance to process the content of the message if the message is indicated as being a type message for which a process instance is to be generated ;
and dequeueing the message based on the processing statistics , the message being dequeued if the processing statistics indicate that no process instances are handling the message and the processing statistics indicate that no process instance is to process content of the message , and , if the content of the message is to be processed by a threshold number of process instances , dequeueing the message only if the processing statistics indicate that the threshold number of process instances have processed the content of the message .

EP1939743A2
CLAIM 9
A computer program product operable to cause data processing apparatus to perform operations comprising : buffering first data representing an event in a queue of second data (identifying one) representing a plurality of events , associating the first data with processing statistics , the processing statistics characterizing : whether a process instance is to process the first data ;
a number of process instances handling the first data ;
and a number of process instances that have processed the first data ;
and generating a process instance to process the first data if the first data is indicated as data for which a process instance is to be generated ;
and dequeueing the first data based on the processing statistics , the first data being dequeued if the processing statistics indicate that no process instances are handling the first data and the processing statistics indicate that no process instance is to process the first data , and , if the first data is to be processed by a threshold number of process instances , dequeueing the first data only if the processing statistics indicate that the threshold number of process instances have processed the first data .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20090113440A1

Filed: 2007-10-30     Issued: 2009-04-30

Multiple Queue Resource Manager

(Original Assignee) Raytheon Co     (Current Assignee) Raytheon Co

Jared B. Dorny
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20090113440A1
CLAIM 1
. A computing system comprising : a multiple queue resource manager in communication with a plurality of clients and at least one processor (producer worker) configured in the computing system , the multiple queue resource manager operable to : create a plurality of queues for each of the plurality of clients , each of the plurality of queues operable to receive messages from its respective client ;
and create at least one thread that is coupled to the at least one processor , the at least one thread being configured to alternatively process a specified quantity of the messages from each of the plurality of queues .
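
For context, a minimal Python sketch of the reference's per-client queues and "specified quantity" servicing loop (US20090113440A1, claim 1) is shown below; the client names and the round length are illustrative assumptions.

    # One servicing pass alternately processes a fixed quantity from each client's queue.
    from collections import deque

    client_queues = {"client-A": deque([1, 2, 3]), "client-B": deque([10, 20])}
    QUANTITY = 2  # the 'specified quantity of the messages' per visit (assumed value)

    def service_round(queues, handle):
        for client, q in queues.items():
            for _ in range(min(QUANTITY, len(q))):
                handle(client, q.popleft())

    service_round(client_queues, handle=lambda c, m: print(c, m))
    # client-A 1 / client-A 2 / client-B 10 / client-B 20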

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker (one processor) prior to storing the message in the queue cache at the second server .
US20090113440A1
CLAIM 1
. A computing system comprising : a multiple queue resource manager in communication with a plurality of clients and at least one processor (producer worker) configured in the computing system , the multiple queue resource manager operable to : create a plurality of queues for each of the plurality of clients , each of the plurality of queues operable to receive messages from its respective client ;
and create at least one thread that is coupled to the at least one processor , the at least one thread being configured to alternatively process a specified quantity of the messages from each of the plurality of queues .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker (one processor) on a first virtual machine ;

and executing the consumer worker on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20090113440A1
CLAIM 1
. A computing system comprising : a multiple queue resource manager in communication with a plurality of clients and at least one processor (producer worker) configured in the computing system , the multiple queue resource manager operable to : create a plurality of queues for each of the plurality of clients , each of the plurality of queues operable to receive messages from its respective client ;
and create at least one thread that is coupled to the at least one processor , the at least one thread being configured to alternatively process a specified quantity of the messages from each of the plurality of queues .

US9479472B2
CLAIM 7
. A computing device (elapsed time) to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20090113440A1
CLAIM 1
. A computing system comprising : a multiple queue resource manager in communication with a plurality of clients and at least one processor (producer worker) configured in the computing system , the multiple queue resource manager operable to : create a plurality of queues for each of the plurality of clients , each of the plurality of queues operable to receive messages from its respective client ;
and create at least one thread that is coupled to the at least one processor , the at least one thread being configured to alternatively process a specified quantity of the messages from each of the plurality of queues .

US20090113440A1
CLAIM 18
. The method of claim 17 , further comprising comparing an elapsed time (computing device) between messages received by the client and a specified time and when the elapsed time is equivalent to the specified time , processing , the second plurality of messages from the another one of the plurality of clients .

US9479472B2
CLAIM 8
. The computing device (elapsed time) of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server .
US20090113440A1
CLAIM 18
. The method of claim 17 , further comprising comparing an elapsed time (computing device) between messages received by the client and a specified time and when the elapsed time is equivalent to the specified time , processing , the second plurality of messages from the another one of the plurality of clients .
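
A brief Python sketch of the command-channel handling recited in claims 8 and 17 (modify the locally stored message on a signal, delete it on a delete command) follows; the message-id keyed cache and the signal vocabulary are hypothetical details.

    # Locally cached copy of a message stored at the first server (assumed layout).
    queue_cache = {"msg-1": {"body": b"payload", "visible": True}}

    def on_command_channel(signal):
        msg = queue_cache.get(signal["message_id"])
        if msg is None:
            return
        if signal["type"] == "delete":
            del queue_cache[signal["message_id"]]        # delete command from the datacenter queue
        else:
            msg["visible"] = signal.get("visible", msg["visible"])  # modify in response to the signal

    on_command_channel({"type": "update", "message_id": "msg-1", "visible": False})
    on_command_channel({"type": "delete", "message_id": "msg-1"})
    print(queue_cache)  # {}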

US9479472B2
CLAIM 9
. The computing device (elapsed time) of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20090113440A1
CLAIM 18
. The method of claim 17 , further comprising comparing an elapsed time (computing device) between messages received by the client and a specified time and when the elapsed time is equivalent to the specified time , processing , the second plurality of messages from the another one of the plurality of clients .

US9479472B2
CLAIM 10
. The computing device (elapsed time) of claim 7 , wherein the VMM application is further configured to : observe network traffic (priority level) through a network connection to identify the producer worker (one processor) associated with the message .
US20090113440A1
CLAIM 1
. A computing system comprising : a multiple queue resource manager in communication with a plurality of clients and at least one processor (producer worker) configured in the computing system , the multiple queue resource manager operable to : create a plurality of queues for each of the plurality of clients , each of the plurality of queues operable to receive messages from its respective client ;
and create at least one thread that is coupled to the at least one processor , the at least one thread being configured to alternatively process a specified quantity of the messages from each of the plurality of queues .

US20090113440A1
CLAIM 8
. The computing system of claim 1 , wherein each of the plurality of threads is operable to select another queue for processing based on a priority level (network traffic) .

US20090113440A1
CLAIM 18
. The method of claim 17 , further comprising comparing an elapsed time (computing device) between messages received by the client and a specified time and when the elapsed time is equivalent to the specified time , processing , the second plurality of messages from the another one of the plurality of clients .

US9479472B2
CLAIM 11
. The computing device (elapsed time) of claim 7 , wherein the VMM application is further configured to : observe network traffic (priority level) through a network connection to detect the datacenter queue associated with the message .
US20090113440A1
CLAIM 8
. The computing system of claim 1 , wherein each of the plurality of threads is operable to select another queue for processing based on a priority level (network traffic) .

US20090113440A1
CLAIM 18
. The method of claim 17 , further comprising comparing an elapsed time (computing device) between messages received by the client and a specified time and when the elapsed time is equivalent to the specified time , processing , the second plurality of messages from the another one of the plurality of clients .
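
The traffic-observation limitations of claims 10 and 11 can be pictured with the short Python sketch below; the packet fields and the queue-endpoint heuristic are assumptions for demonstration only.

    # Observe outbound traffic to map local producer workers to the datacenter queues they use.
    def observe(packets):
        producer_to_queue = {}
        for pkt in packets:
            if pkt["dst_service"] == "queue-service":      # assumed marker of a queue message
                producer_to_queue[pkt["src_worker"]] = pkt["queue_name"]
        return producer_to_queue

    pkts = [{"src_worker": "worker-A", "dst_service": "queue-service", "queue_name": "orders-queue"}]
    print(observe(pkts))  # {'worker-A': 'orders-queue'}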

US9479472B2
CLAIM 12
. The computing device (elapsed time) of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (one processor) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20090113440A1
CLAIM 1
. A computing system comprising : a multiple queue resource manager in communication with a plurality of clients and at least one processor (producer worker) configured in the computing system , the multiple queue resource manager operable to : create a plurality of queues for each of the plurality of clients , each of the plurality of queues operable to receive messages from its respective client ;
and create at least one thread that is coupled to the at least one processor , the at least one thread being configured to alternatively process a specified quantity of the messages from each of the plurality of queues .

US20090113440A1
CLAIM 18
. The method of claim 17 , further comprising comparing an elapsed time (computing device) between messages received by the client and a specified time and when the elapsed time is equivalent to the specified time , processing , the second plurality of messages from the another one of the plurality of clients .

US9479472B2
CLAIM 13
. The computing device (elapsed time) of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information .
US20090113440A1
CLAIM 18
. The method of claim 17 , further comprising comparing an elapsed time (computing device) between messages received by the client and a specified time and when the elapsed time is equivalent to the specified time , processing , the second plurality of messages from the another one of the plurality of clients .

US9479472B2
CLAIM 14
. The computing device (elapsed time) of claim 12 , wherein the VMM application is further configured to : determine matching producer worker (one processor) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20090113440A1
CLAIM 1
. A computing system comprising : a multiple queue resource manager in communication with a plurality of clients and at least one processor (producer worker) configured in the computing system , the multiple queue resource manager operable to : create a plurality of queues for each of the plurality of clients , each of the plurality of queues operable to receive messages from its respective client ;
and create at least one thread that is coupled to the at least one processor , the at least one thread being configured to alternatively process a specified quantity of the messages from each of the plurality of queues .

US20090113440A1
CLAIM 18
. The method of claim 17 , further comprising comparing an elapsed time (computing device) between messages received by the client and a specified time and when the elapsed time is equivalent to the specified time , processing , the second plurality of messages from the another one of the plurality of clients .

US9479472B2
CLAIM 15
. The computing device (elapsed time) of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker (one processor) and the consumer worker .
US20090113440A1
CLAIM 1
. A computing system comprising : a multiple queue resource manager in communication with a plurality of clients and at least one processor (producer worker) configured in the computing system , the multiple queue resource manager operable to : create a plurality of queues for each of the plurality of clients , each of the plurality of queues operable to receive messages from its respective client ;
and create at least one thread that is coupled to the at least one processor , the at least one thread being configured to alternatively process a specified quantity of the messages from each of the plurality of queues .

US20090113440A1
CLAIM 18
. The method of claim 17 , further comprising comparing an elapsed time (computing device) between messages received by the client and a specified time and when the elapsed time is equivalent to the specified time , processing , the second plurality of messages from the another one of the plurality of clients .

US9479472B2
CLAIM 16
. The computing device (elapsed time) of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker (one processor) ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
US20090113440A1
CLAIM 1
. A computing system comprising : a multiple queue resource manager in communication with a plurality of clients and at least one processor (producer worker) configured in the computing system , the multiple queue resource manager operable to : create a plurality of queues for each of the plurality of clients , each of the plurality of queues operable to receive messages from its respective client ;
and create at least one thread that is coupled to the at least one processor , the at least one thread being configured to alternatively process a specified quantity of the messages from each of the plurality of queues .

US20090113440A1
CLAIM 18
. The method of claim 17 , further comprising comparing an elapsed time (computing device) between messages received by the client and a specified time and when the elapsed time is equivalent to the specified time , processing , the second plurality of messages from the another one of the plurality of clients .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20090113440A1
CLAIM 1
. A computing system comprising : a multiple queue resource manager in communication with a plurality of clients and at least one processor (producer worker) configured in the computing system , the multiple queue resource manager operable to : create a plurality of queues for each of the plurality of clients , each of the plurality of queues operable to receive messages from its respective client ;
and create at least one thread that is coupled to the at least one processor , the at least one thread being configured to alternatively process a specified quantity of the messages from each of the plurality of queues .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (one processor) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20090113440A1
CLAIM 1
. A computing system comprising : a multiple queue resource manager in communication with a plurality of clients and at least one processor (producer worker) configured in the computing system , the multiple queue resource manager operable to : create a plurality of queues for each of the plurality of clients , each of the plurality of queues operable to receive messages from its respective client ;
and create at least one thread that is coupled to the at least one processor , the at least one thread being configured to alternatively process a specified quantity of the messages from each of the plurality of queues .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker (one processor) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20090113440A1
CLAIM 1
. A computing system comprising : a multiple queue resource manager in communication with a plurality of clients and at least one processor (producer worker) configured in the computing system , the multiple queue resource manager operable to : create a plurality of queues for each of the plurality of clients , each of the plurality of queues operable to receive messages from its respective client ;
and create at least one thread that is coupled to the at least one processor , the at least one thread being configured to alternatively process a specified quantity of the messages from each of the plurality of queues .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20080077939A1

Filed: 2007-07-31     Issued: 2008-03-27

Solution for modifying a queue manager to support smart aliasing which permits extensible software to execute against queued data without application modifications

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

Richard Michael Harran, Stephen James Hobson, Peter Siddall
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (given operation) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20080077939A1
CLAIM 11
. A queue manager comprising : a smart alias function configured to associate a queue name with a plurality of different queues , wherein which one of the queues is associated with the queue name for a given operation (second server) is dependent upon programmatically determinable conditions , wherein the queue manager is configured to receives digitally encoded messages , to store the received digitally encoded messages , and to provide the digitally encoded messages to authorized requesting software applications , and wherein the queue manager and the smart alias function comprises a set of programmatic instructions stored in a machine readable medium , wherein said programmatic instructions are readable by a machine , which cause the machine to perform a set of operations for which the associated queue manager or smart alias function are configured .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one (following steps) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20080077939A1
CLAIM 17
. A method of modifying a queue manager software program , comprising steps of : starting with a queue manager software program , which receives a message into a first queue , having a queue name , from an application which specifies the queue name to the queue manager software program , where the queue manager software program has an alias function which checks , for each message received from an application , whether the queue name specified by the application has a defined alias queue name , and if it does , then the queue manager software application places the message into a second queue having the defined alias queue name , and if it does not , then the queue manager software application places the message into the first queue ;
and modifying the alias function so that the alias function carries out the following steps (identifying one) , for each message received from an application ;
(a) if an application that is communicating with the queue manager software program is specifying the defined alias queue name , to the queue manager software program , and is providing a message to be stored onto a queue , then the queue manager software program places the message onto the first queue , and (b) if an application that is communicating with the queue manager software program is specifying the defined alias queue name , to the queue manager software program , and is receiving a message from a queue , then the queue manager software program receives the message from the second queue .
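
For illustration only, a minimal Python sketch of the modified alias function described in steps (a) and (b) above: the direction of the operation decides which underlying queue an alias name resolves to, so a put under the alias lands on the first queue while a get under the alias is served from the second queue. All identifiers (AliasingQueueManager, "ALIAS.Q", the queue names) are hypothetical.

```python
class AliasingQueueManager:
    """Hypothetical sketch: an alias name resolves to different underlying
    queues depending on whether the application is putting or getting."""

    def __init__(self):
        self.queues = {"first": [], "second": []}
        self.alias = {"ALIAS.Q": ("first", "second")}   # alias -> (put target, get source)

    def put(self, queue_name, message):
        put_target, _ = self.alias.get(queue_name, (queue_name, queue_name))
        self.queues.setdefault(put_target, []).append(message)   # step (a): place onto the first queue

    def get(self, queue_name):
        _, get_source = self.alias.get(queue_name, (queue_name, queue_name))
        pending = self.queues.setdefault(get_source, [])
        return pending.pop(0) if pending else None                # step (b): receive from the second queue
```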

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (given operation) .
US20080077939A1
CLAIM 11
. A queue manager comprising : a smart alias function configured to associate a queue name with a plurality of different queues , wherein which one of the queues is associated with the queue name for a given operation (second server) is dependent upon programmatically determinable conditions , wherein the queue manager is configured to receives digitally encoded messages , to store the received digitally encoded messages , and to provide the digitally encoded messages to authorized requesting software applications , and wherein the queue manager and the smart alias function comprises a set of programmatic instructions stored in a machine readable medium , wherein said programmatic instructions are readable by a machine , which cause the machine to perform a set of operations for which the associated queue manager or smart alias function are configured .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (given operation) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20080077939A1
CLAIM 11
. A queue manager comprising : a smart alias function configured to associate a queue name with a plurality of different queues , wherein which one of the queues is associated with the queue name for a given operation (second server) is dependent upon programmatically determinable conditions , wherein the queue manager is configured to receives digitally encoded messages , to store the received digitally encoded messages , and to provide the digitally encoded messages to authorized requesting software applications , and wherein the queue manager and the smart alias function comprises a set of programmatic instructions stored in a machine readable medium , wherein said programmatic instructions are readable by a machine , which cause the machine to perform a set of operations for which the associated queue manager or smart alias function are configured .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (given operation) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20080077939A1
CLAIM 11
. A queue manager comprising : a smart alias function configured to associate a queue name with a plurality of different queues , wherein which one of the queues is associated with the queue name for a given operation (second server) is dependent upon programmatically determinable conditions , wherein the queue manager is configured to receives digitally encoded messages , to store the received digitally encoded messages , and to provide the digitally encoded messages to authorized requesting software applications , and wherein the queue manager and the smart alias function comprises a set of programmatic instructions stored in a machine readable medium , wherein said programmatic instructions are readable by a machine , which cause the machine to perform a set of operations for which the associated queue manager or smart alias function are configured .
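
For illustration only, a minimal Python sketch of how the command-channel behavior recited in claim 17 of the '472 patent (receive a signal, modify the message; on a delete command, remove it) could be dispatched against a local queue cache. The command format and the cache layout here are hypothetical, not taken from either reference.

```python
def handle_command(command, queue_cache, queue_name, message_id):
    """Illustrative dispatch for a signal arriving on the command channel
    associated with a datacenter queue; all names are hypothetical."""
    entry = queue_cache.get(queue_name, {}).get(message_id)
    if entry is None:
        return
    if command["type"] == "modify":
        entry["body"] = command["new_body"]        # modify the message in response to the signal
    elif command["type"] == "delete":
        del queue_cache[queue_name][message_id]    # delete the locally held copy on a delete command
```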

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one (following steps) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20080077939A1
CLAIM 17
. A method of modifying a queue manager software program , comprising steps of : starting with a queue manager software program , which receives a message into a first queue , having a queue name , from an application which specifies the queue name to the queue manager software program , where the queue manager software program has an alias function which checks , for each message received from an application , whether the queue name specified by the application has a defined alias queue name , and if it does , then the queue manager software application places the message into a second queue having the defined alias queue name , and if it does not , then the queue manager software application places the message into the first queue ;
and modifying the alias function so that the alias function carries out the following steps (identifying one) , for each message received from an application ;
(a) if an application that is communicating with the queue manager software program is specifying the defined alias queue name , to the queue manager software program , and is providing a message to be stored onto a queue , then the queue manager software program places the message onto the first queue , and (b) if an application that is communicating with the queue manager software program is specifying the defined alias queue name , to the queue manager software program , and is receiving a message from a queue , then the queue manager software program receives the message from the second queue .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070239838A1

Filed: 2007-04-09     Issued: 2007-10-11

Methods and systems for digital content sharing

(Original Assignee) Nokia Oyj; Twango Inc     (Current Assignee) Nokia Technologies Oy

James Laurel, Michael Laurel, Serena Glover, Don Kim, Philip Carmichael, Randall Kerr
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (more servers) at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (second email, first email) to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .
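
For illustration only, a minimal Python sketch of the address handling described in claims 17 and 18 of US20070239838A1: one portion of the email address carries a fixed user identification used to resolve the channel, while another portion may be changed without breaking that resolution. The address format ("user.token@host") and all names are assumptions for this sketch.

```python
def channel_for_address(address, channels_by_user):
    """Hypothetical parser: only the fixed user-id portion of the local part
    is used to find the channel; the token portion may change freely."""
    local_part, _, _domain = address.partition("@")
    user_id, _, _token = local_part.partition(".")   # token is the adjustable portion
    return channels_by_user.get(user_id)             # same channel before and after the change

channels = {"alice": "family-photos"}
assert channel_for_address("alice.x7f2@share.example.com", channels) == "family-photos"
assert channel_for_address("alice.new99@share.example.com", channels) == "family-photos"
```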

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel associated with the datacenter queue (more servers) ;

and modifying the message in response to receiving the signal .
US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command from the datacenter queue (more servers) , deleting the message from the datacenter queue .
US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request (second email, first email) and the datacenter queue (more servers) associated with the message request .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (more servers) at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue (more servers) , delete the message from the first server .
US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request (second email, first email) sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue (more servers) associated with the message request .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection (third parties) to identify the producer worker associated with the message .
US20070239838A1
CLAIM 35
. The system of claim 33 wherein the operations further comprise providing a selectable option to the first party user for each of the one or more channels to allow the first party user to elect one or more other third parties (network connection) to moderate a channel .
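
For illustration only, a minimal Python sketch of the traffic-observation step recited in claims 10 and 11 of the '472 patent (observe network traffic through a network connection to identify the producer worker and the datacenter queue). The flow-record fields and the endpoint map are assumptions introduced for this sketch.

```python
KNOWN_QUEUE_ENDPOINTS = {("10.0.2.15", 5672): "orders-queue"}   # hypothetical queue endpoint map

def classify_flow(flow):
    """Illustrative: attribute an observed flow to a producer worker and a
    datacenter queue by matching the destination endpoint."""
    queue_name = KNOWN_QUEUE_ENDPOINTS.get((flow["dst_ip"], flow["dst_port"]))
    if queue_name is not None:
        return {"producer": flow["src_vm"], "datacenter_queue": queue_name}
    return None   # traffic not directed at a known datacenter queue
```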

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection (third parties) to detect the datacenter queue (more servers) associated with the message .
US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US20070239838A1
CLAIM 35
. The system of claim 33 wherein the operations further comprise providing a selectable option to the first party user for each of the one or more channels to allow the first party user to elect one or more other third parties (network connection) to moderate a channel .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (more servers) information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (more servers) , and identify a message request (second email, first email) that includes matching the consumer worker to the other datacenter queue .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .
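
For illustration only, a minimal Python sketch of the queue user table and producer/consumer matching recited in claims 12, 14, 18 and 19 of the '472 patent: observed queue usage is collected per queue, and producers are paired with consumers that use the same datacenter queue. The observation record format and all names are assumptions for this sketch.

```python
from collections import defaultdict

def build_queue_user_table(observations):
    """Illustrative queue user table: queue name -> the producer and consumer
    workers observed using it."""
    table = defaultdict(lambda: {"producers": set(), "consumers": set()})
    for obs in observations:   # e.g. {"worker": "w1", "role": "producer", "queue": "q1"}
        table[obs["queue"]][obs["role"] + "s"].add(obs["worker"])
    return table

def matched_pairs(table):
    """Pair each producer worker with each consumer worker seen on the same datacenter queue."""
    pairs = []
    for queue_name, users in table.items():
        for producer in users["producers"]:
            for consumer in users["consumers"]:
                pairs.append((producer, consumer, queue_name))
    return pairs
```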

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request (second email, first email) .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (more servers) at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (second email, first email) to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (more servers) information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (more servers) , and identify a message request (second email, first email) that includes matching the consumer worker to the other datacenter queue .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request (second email, first email) and the datacenter queue (more servers) associated with the message request .
US20070239838A1
CLAIM 17
. A method of sharing digital content through a network , the method comprising : providing a server communicatively coupled to the network and having a repository for use in storing electronic files ;
establishing channels at the server with which the electronic files can be associated ;
recognizing at the server a first email (message request) address and associating an electronic file transmitted to the first email address with a first channel , the first channel designating party access rights ;
and recognizing at the server a second email (message request) address and associating an electronic file transmitted to the second email address with the first channel , and wherein a first party can selectively change at least a portion of the second email address , and wherein the server can still thereafter recognize the changed second email address and automatically associate an electronic file transmitted to the changed second email address with the first channel .

US20070239838A1
CLAIM 18
. A system for sharing electronic files through one or more servers (datacenter queue) on a network , comprising : a server operable for receiving an electronic file transmitted over a network in association with an email sent to an Internet email address , the Internet email address having a format comprising at least one portion that includes a user identification usable by the server for identifying a party and at least another portion that is adjustable and can be changed without impairing the server's ability to identify the party as a function of the at least one portion of the Internet email address when the email is received by the server ;
a memory integral or coupled to the server for storing the electronic file ;
and a processor operable to save the electronic file to the memory upon receipt of the electronic file at the server .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20080212602A1

Filed: 2007-03-01     Issued: 2008-09-04

Method, system and program product for optimizing communication and processing functions between disparate applications

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

Alphana B. Hobbs, Daniel P. Huskey, Shirish S. Javalkar, Tuan A. Pham, William J. Reilly, Allen J. Scribner, Deirdre A. Wessel
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server (second request) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20080212602A1
CLAIM 1
. A method of optimizing communication and processing functions between disparate applications , said method comprising the steps of : sending , from a first application to a second application , a request message of one or more request messages , said request message of said one or more request messages being formatted in a first request-format to provide a plurality of unique data elements relevant to processing said request message and having a reduced data size for optimizing communication ;
reformatting , by said second application , said request message received having said first request-format into a reformatted request message having a second request (first server) -format , said reformatted request message being forwarded to a third application ;
creating , by said third application , a response message having a first response-format , said response message being sent to said second application ;
and queuing , by a messaging application , each response message received from said second application into a response message collection corresponding to a message type , before sending said response group to said first application , wherein processing of said response message collection received by said first application is optimized .
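
For illustration only, a minimal Python sketch of the queuing step recited in claim 1 of US20080212602A1: response messages are grouped into a collection per message type before the whole group is sent to the first application. The batch threshold and all names are assumptions introduced for this sketch.

```python
from collections import defaultdict

class ResponseCollector:
    """Illustrative batching: responses are grouped by message type and the
    whole collection is forwarded once it reaches a (hypothetical) batch size."""

    def __init__(self, send_group, batch_size=10):
        self.send_group = send_group          # callable that delivers a response group
        self.batch_size = batch_size
        self.collections = defaultdict(list)  # message type -> pending responses

    def queue_response(self, message_type, response):
        group = self.collections[message_type]
        group.append(response)
        if len(group) >= self.batch_size:     # send the collection as one response group
            self.send_group(message_type, group)
            self.collections[message_type] = []
```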

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one (data elements) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20080212602A1
CLAIM 1
. A method of optimizing communication and processing functions between disparate applications , said method comprising the steps of : sending , from a first application to a second application , a request message of one or more request messages , said request message of said one or more request messages being formatted in a first request-format to provide a plurality of unique data elements (identifying one) relevant to processing said request message and having a reduced data size for optimizing communication ;
reformatting , by said second application , said request message received having said first request-format into a reformatted request message having a second request-format , said reformatted request message being forwarded to a third application ;
creating , by said third application , a response message having a first response-format , said response message being sent to said second application ;
and queuing , by a messaging application , each response message received from said second application into a response message collection corresponding to a message type , before sending said response group to said first application , wherein processing of said response message collection received by said first application is optimized .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server (second request) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20080212602A1
CLAIM 1
. A method of optimizing communication and processing functions between disparate applications , said method comprising the steps of : sending , from a first application to a second application , a request message of one or more request messages , said request message of said one or more request messages being formatted in a first request-format to provide a plurality of unique data elements relevant to processing said request message and having a reduced data size for optimizing communication ;
reformatting , by said second application , said request message received having said first request-format into a reformatted request message having a second request (first server) -format , said reformatted request message being forwarded to a third application ;
creating , by said third application , a response message having a first response-format , said response message being sent to said second application ;
and queuing , by a messaging application , each response message received from said second application into a response message collection corresponding to a message type , before sending said response group to said first application , wherein processing of said response message collection received by said first application is optimized .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server (second request) .
US20080212602A1
CLAIM 1
. A method of optimizing communication and processing functions between disparate applications , said method comprising the steps of : sending , from a first application to a second application , a request message of one or more request messages , said request message of said one or more request messages being formatted in a first request-format to provide a plurality of unique data elements relevant to processing said request message and having a reduced data size for optimizing communication ;
reformatting , by said second application , said request message received having said first request-format into a reformatted request message having a second request (first server) -format , said reformatted request message being forwarded to a third application ;
creating , by said third application , a response message having a first response-format , said response message being sent to said second application ;
and queuing , by a messaging application , each response message received from said second application into a response message collection corresponding to a message type , before sending said response group to said first application , wherein processing of said response message collection received by said first application is optimized .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server (second request) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20080212602A1
CLAIM 1
. A method of optimizing communication and processing functions between disparate applications , said method comprising the steps of : sending , from a first application to a second application , a request message of one or more request messages , said request message of said one or more request messages being formatted in a first request-format to provide a plurality of unique data elements relevant to processing said request message and having a reduced data size for optimizing communication ;
reformatting , by said second application , said request message received having said first request-format into a reformatted request message having a second request (first server) -format , said reformatted request message being forwarded to a third application ;
creating , by said third application , a response message having a first response-format , said response message being sent to said second application ;
and queuing , by a messaging application , each response message received from said second application into a response message collection corresponding to a message type , before sending said response group to said first application , wherein processing of said response message collection received by said first application is optimized .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one (data elements) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20080212602A1
CLAIM 1
. A method of optimizing communication and processing functions between disparate applications , said method comprising the steps of : sending , from a first application to a second application , a request message of one or more request messages , said request message of said one or more request messages being formatted in a first request-format to provide a plurality of unique data elements (identifying one) relevant to processing said request message and having a reduced data size for optimizing communication ;
reformatting , by said second application , said request message received having said first request-format into a reformatted request message having a second request-format , said reformatted request message being forwarded to a third application ;
creating , by said third application , a response message having a first response-format , said response message being sent to said second application ;
and queuing , by a messaging application , each response message received from said second application into a response message collection corresponding to a message type , before sending said response group to said first application , wherein processing of said response message collection received by said first application is optimized .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20080148281A1

Filed: 2006-12-14     Issued: 2008-06-19

RDMA (remote direct memory access) data transfer in a virtual environment

(Original Assignee) Intel Corp     (Current Assignee) Intel Corp

William R. Magro, Robert J. Woodruff, Jianxin Xiong
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (virtual machine monitor) at least partially stored at a second server (second virtual machine) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20080148281A1
CLAIM 1
. A method comprising : determining that a message has been placed in a send buffer ;
and transferring the message to an application on a second virtual machine (second server, second message) by bypassing use of an operating system to process the message by directly placing the message in an application memory space from which the application can retrieve the message .

US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .
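
For illustration only, a minimal Python sketch of the bypass transfer described in claims 1 and 11 of US20080148281A1: once a message appears in a send buffer, it is placed directly into an application memory space from which the receiving application retrieves it, with no per-message operating-system processing modeled on the receive path. The classes here (SharedBuffer, BypassTransport) are stand-ins invented for this sketch, not the reference's implementation.

```python
class SharedBuffer:
    """Stand-in for an application memory space visible to the receiving virtual machine."""
    def __init__(self):
        self.slots = []

class BypassTransport:
    """Illustrative bypass: messages detected in the send buffer are copied
    straight into the receiver's application memory space."""

    def __init__(self, app_memory: SharedBuffer):
        self.send_buffer = []
        self.app_memory = app_memory

    def post_send(self, message):
        self.send_buffer.append(message)   # message placed in the send buffer
        self._transfer()

    def _transfer(self):
        while self.send_buffer:            # determine that a message has been placed in the send buffer
            self.app_memory.slots.append(self.send_buffer.pop(0))   # place it directly in app memory

def receive(app_memory: SharedBuffer):
    return app_memory.slots.pop(0) if app_memory.slots else None    # application retrieves the message
```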

US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel associated with the datacenter queue (virtual machine monitor) ;

and modifying the message in response to receiving the signal .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command from the datacenter queue (virtual machine monitor) , deleting the message from the datacenter queue .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request and the datacenter queue (virtual machine monitor) associated with the message request .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (second virtual machine) .
US20080148281A1
CLAIM 1
. A method comprising : determining that a message has been placed in a send buffer ;
and transferring the message to an application on a second virtual machine (second server, second message) by bypassing use of an operating system to process the message by directly placing the message in an application memory space from which the application can retrieve the message .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions (remote direct memory access) ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application (virtual machine monitor) is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (virtual machine monitor) at least partially stored at a second server (second virtual machine) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20080148281A1
CLAIM 1
. A method comprising : determining that a message has been placed in a send buffer ;
and transferring the message to an application on a second virtual machine (second server, second message) by bypassing use of an operating system to process the message by directly placing the message in an application memory space from which the application can retrieve the message .

US20080148281A1
CLAIM 4
. The method of claim 1 , wherein said determining and said transferring are performed by a virtual machine RDMA (remote direct memory access (store instructions) ) interface (VMRI) .

US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application (virtual machine monitor) is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue (virtual machine monitor) , delete the message from the first server .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application (virtual machine monitor) is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue (virtual machine monitor) associated with the message request .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application (virtual machine monitor) is further configured to : observe network traffic through a network connection to identify the producer worker associated with the message .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application (virtual machine monitor) is further configured to : observe network traffic through a network connection to detect the datacenter queue (virtual machine monitor) associated with the message .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .
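
Claims 10 and 11 recite observing network traffic through a network connection to learn which producer worker and which datacenter queue a message is associated with. A hedged sketch, with the observed traffic reduced to simple dictionary records, is:

# Hypothetical traffic observer relating producer workers to datacenter queues
# (claims 10 and 11); the packet records are simplified stand-ins.
from collections import defaultdict
from typing import Dict, Set

def observe_traffic(packets) -> Dict[str, Set[str]]:
    """Map each observed producer worker to the datacenter queues it sends to."""
    producers: Dict[str, Set[str]] = defaultdict(set)
    for pkt in packets:
        if pkt.get("action") == "SendMessage":
            producers[pkt["source_vm"]].add(pkt["queue"])
    return producers

if __name__ == "__main__":
    traffic = [
        {"source_vm": "producer-1", "action": "SendMessage", "queue": "jobs-queue"},
        {"source_vm": "consumer-3", "action": "ReceiveMessage", "queue": "jobs-queue"},
    ]
    print(dict(observe_traffic(traffic)))   # {'producer-1': {'jobs-queue'}}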

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application (virtual machine monitor) is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (virtual machine monitor) information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .
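
Claim 12's queue user table can be pictured as a small index keyed by datacenter queue, accumulating producer worker and consumer worker information from observed queue usage. The field names below are assumptions made for the sketch, not terms from the patent:

# Hypothetical queue user table built from observed queue usage information (claim 12).
from typing import Dict, List

def build_queue_user_table(observations: List[dict]) -> Dict[str, dict]:
    table: Dict[str, dict] = {}
    for obs in observations:
        entry = table.setdefault(obs["queue"], {"producers": set(), "consumers": set()})
        if obs["role"] == "producer":
            entry["producers"].add(obs["worker"])
        elif obs["role"] == "consumer":
            entry["consumers"].add(obs["worker"])
    return table

if __name__ == "__main__":
    usage = [
        {"queue": "jobs-queue", "worker": "producer-1", "role": "producer"},
        {"queue": "jobs-queue", "worker": "consumer-3", "role": "consumer"},
    ]
    print(build_queue_user_table(usage))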

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application (virtual machine monitor) is further configured to : update the queue user table based on the observed queue usage information .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application (virtual machine monitor) is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (virtual machine monitor) , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .
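
Claim 14 determines matching producer worker and consumer worker pairs through the queue user table: a queue that a local producer sends to and a local consumer reads from yields a matched pair. Continuing the hypothetical table structure from the earlier sketch:

# Hypothetical matching of producer/consumer worker pairs via the queue user table (claim 14).
from typing import Dict, List, Tuple

def match_pairs(table: Dict[str, dict]) -> List[Tuple[str, str, str]]:
    pairs = []
    for queue, entry in table.items():
        for producer in entry["producers"]:
            for consumer in entry["consumers"]:
                pairs.append((producer, consumer, queue))   # matched via this queue
    return pairs

if __name__ == "__main__":
    table = {"jobs-queue": {"producers": {"producer-1"}, "consumers": {"consumer-3"}}}
    print(match_pairs(table))   # [('producer-1', 'consumer-3', 'jobs-queue')]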

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application (virtual machine monitor) is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (second virtual machine) between the producer worker and the consumer worker .
US20080148281A1
CLAIM 1
. A method comprising : determining that a message has been placed in a send buffer ;
and transferring the message to an application on a second virtual machine (second server, second message) by bypassing use of an operating system to process the message by directly placing the message in an application memory space from which the application can retrieve the message .

US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application (virtual machine monitor) is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .
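
Claims 15 and 16 give the intercept module its role: once matched queue information is provided, the module intercepts the producer's message, stores it in the queue cache, and serves the cached copy to the consumer on request. A minimal sketch under those assumptions, with all names invented:

# Hypothetical intercept module with a local queue cache (claims 15 and 16).
from collections import defaultdict, deque
from typing import Deque, Dict, Optional

class InterceptModule:
    def __init__(self) -> None:
        self.matched_queues: set = set()
        self.queue_cache: Dict[str, Deque[str]] = defaultdict(deque)

    def add_matched_queue(self, queue: str) -> None:
        self.matched_queues.add(queue)          # matched queue information from the VMM

    def on_send(self, queue: str, message: str) -> bool:
        """Intercept a producer's send; return True if handled locally."""
        if queue not in self.matched_queues:
            return False                        # pass through to the remote datacenter queue
        self.queue_cache[queue].append(message)
        return True

    def on_receive(self, queue: str) -> Optional[str]:
        """Answer a consumer's message request from the local cache when possible."""
        if queue in self.matched_queues and self.queue_cache[queue]:
            return self.queue_cache[queue].popleft()
        return None                             # fall back to the remote datacenter queue

if __name__ == "__main__":
    im = InterceptModule()
    im.add_matched_queue("jobs-queue")
    im.on_send("jobs-queue", "thumbnail item 9")
    print(im.on_receive("jobs-queue"))          # served locally, no remote round trip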

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (virtual machine monitor) at least partially stored at a second server (second virtual machine) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20080148281A1
CLAIM 1
. A method comprising : determining that a message has been placed in a send buffer ;
and transferring the message to an application on a second virtual machine (second server, second message) by bypassing use of an operating system to process the message by directly placing the message in an application memory space from which the application can retrieve the message .

US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .
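
Claim 17 strings the same steps into one sequence on the first server: detect the producer, intercept and cache the message, detect the consumer, provide the message, then receive a signal from the command channel and modify the cached message. The compact walk-through below mirrors that ordering with invented data:

# Hypothetical end-to-end walk-through of the claim 17 sequence.
from collections import deque

def run_claim17_flow():
    queue_cache = deque()                       # queue cache at the first server

    # Detect the producer worker; intercept its message; store it locally.
    intercepted = {"id": "m-7", "body": "resize image 7", "modified": False}
    queue_cache.append(intercepted)

    # Detect the consumer worker's message request; provide the cached message.
    message = queue_cache[0]

    # Receive a signal from the command channel associated with the datacenter queue.
    signal = {"message_id": "m-7", "op": "hide"}

    # Modify the message in response to receiving the signal.
    if signal["message_id"] == message["id"]:
        message["modified"] = True
    return message

if __name__ == "__main__":
    print(run_claim17_flow())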

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (virtual machine monitor) information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (virtual machine monitor) , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request and the datacenter queue (virtual machine monitor) associated with the message request .
US20080148281A1
CLAIM 11
. A system comprising : an SRAM (static random access memory) ;
and a virtual machine monitor (datacenter queue, VMM application) (VMM) coupled to the SRAM memory to : determine that a message has been placed in a send buffer of the SRAM ;
and transfer the message to an application on a second virtual machine by bypassing use of an operating system to process the message by directly placing the message in an application memory space of the SRAM from which the application can retrieve the message .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070198437A1

Filed: 2006-12-01     Issued: 2007-08-23

System and method for exchanging information among exchange applications

(Original Assignee) FireStar Software Inc     (Current Assignee) FireStar Software Inc

Mark Eisner, Gabriel Oancea
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker (including information) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20070198437A1
CLAIM 2
. The method according to claim 1 , further comprising storing configuration data in the data store in the gateway , the configuration data including information (consumer worker) defining the one or more simple transactions that can be performed by the gateway , wherein the at least one simple transaction is executed in accordance with the information defining one or more simple transactions in the configuration data stored in the data store .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker (including information) associated with the message request and the datacenter queue associated with the message request .
US20070198437A1
CLAIM 2
. The method according to claim 1 , further comprising storing configuration data in the data store in the gateway , the configuration data including information (consumer worker) defining the one or more simple transactions that can be performed by the gateway , wherein the at least one simple transaction is executed in accordance with the information defining one or more simple transactions in the configuration data stored in the data store .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker on a first virtual machine ;

and executing the consumer worker (including information) on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20070198437A1
CLAIM 2
. The method according to claim 1 , further comprising storing configuration data in the data store in the gateway , the configuration data including information (consumer worker) defining the one or more simple transactions that can be performed by the gateway , wherein the at least one simple transaction is executed in accordance with the information defining one or more simple transactions in the configuration data stored in the data store .
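
Claim 6 requires the producer worker and consumer worker to run on separate virtual machines that execute on the same physical hardware; local queue handling presupposes that co-location check. A one-function sketch, assuming a hypothetical placement map from VM to physical host:

# Hypothetical co-location check for claim 6.
def co_located(placement: dict, producer_vm: str, consumer_vm: str) -> bool:
    """placement maps VM id -> physical host id, as a hypervisor might report it."""
    return placement.get(producer_vm) is not None and \
           placement.get(producer_vm) == placement.get(consumer_vm)

if __name__ == "__main__":
    placement = {"producer-vm": "host-17", "consumer-vm": "host-17"}
    print(co_located(placement, "producer-vm", "consumer-vm"))   # True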

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker (including information) at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20070198437A1
CLAIM 2
. The method according to claim 1 , further comprising storing configuration data in the data store in the gateway , the configuration data including information (consumer worker) defining the one or more simple transactions that can be performed by the gateway , wherein the at least one simple transaction is executed in accordance with the information defining one or more simple transactions in the configuration data stored in the data store .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker (including information) executing a virtual machine ;

and identify one (configured to store) or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20070198437A1
CLAIM 2
. The method according to claim 1 , further comprising storing configuration data in the data store in the gateway , the configuration data including information (consumer worker) defining the one or more simple transactions that can be performed by the gateway , wherein the at least one simple transaction is executed in accordance with the information defining one or more simple transactions in the configuration data stored in the data store .

US20070198437A1
CLAIM 18
. The gateway according to claim 17 , wherein the memory further comprises an instruction configured to store (identify one) configuration data in the data store in the gateway , the configuration data including information defining the one or more simple transactions that can be performed by the gateway , wherein the at least one simple transaction is executed in accordance with the information defining one or more simple transactions in the configuration data stored in the data store .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker (including information) information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20070198437A1
CLAIM 2
. The method according to claim 1 , further comprising storing configuration data in the data store in the gateway , the configuration data including information (consumer worker) defining the one or more simple transactions that can be performed by the gateway , wherein the at least one simple transaction is executed in accordance with the information defining one or more simple transactions in the configuration data stored in the data store .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker (including information) pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20070198437A1
CLAIM 2
. The method according to claim 1 , further comprising storing configuration data in the data store in the gateway , the configuration data including information (consumer worker) defining the one or more simple transactions that can be performed by the gateway , wherein the at least one simple transaction is executed in accordance with the information defining one or more simple transactions in the configuration data stored in the data store .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker (including information) pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
US20070198437A1
CLAIM 2
. The method according to claim 1 , further comprising storing configuration data in the data store in the gateway , the configuration data including information (consumer worker) defining the one or more simple transactions that can be performed by the gateway , wherein the at least one simple transaction is executed in accordance with the information defining one or more simple transactions in the configuration data stored in the data store .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker (including information) in response to the message request .
US20070198437A1
CLAIM 2
. The method according to claim 1 , further comprising storing configuration data in the data store in the gateway , the configuration data including information (consumer worker) defining the one or more simple transactions that can be performed by the gateway , wherein the at least one simple transaction is executed in accordance with the information defining one or more simple transactions in the configuration data stored in the data store .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker (including information) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20070198437A1
CLAIM 2
. The method according to claim 1 , further comprising storing configuration data in the data store in the gateway , the configuration data including information (consumer worker) defining the one or more simple transactions that can be performed by the gateway , wherein the at least one simple transaction is executed in accordance with the information defining one or more simple transactions in the configuration data stored in the data store .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker (including information) information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20070198437A1
CLAIM 2
. The method according to claim 1 , further comprising storing configuration data in the data store in the gateway , the configuration data including information (consumer worker) defining the one or more simple transactions that can be performed by the gateway , wherein the at least one simple transaction is executed in accordance with the information defining one or more simple transactions in the configuration data stored in the data store .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker (including information) pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20070198437A1
CLAIM 2
. The method according to claim 1 , further comprising storing configuration data in the data store in the gateway , the configuration data including information (consumer worker) defining the one or more simple transactions that can be performed by the gateway , wherein the at least one simple transaction is executed in accordance with the information defining one or more simple transactions in the configuration data stored in the data store .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker (including information) associated with the message request and the datacenter queue associated with the message request .
US20070198437A1
CLAIM 2
. The method according to claim 1 , further comprising storing configuration data in the data store in the gateway , the configuration data including information (consumer worker) defining the one or more simple transactions that can be performed by the gateway , wherein the at least one simple transaction is executed in accordance with the information defining one or more simple transactions in the configuration data stored in the data store .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
EP1955281A2

Filed: 2006-12-01     Issued: 2008-08-13

System and method for exchanging information among exchange applications

(Original Assignee) FireStar Software Inc     (Current Assignee) FireStar Software Inc

Mark Eisner, Gabriel Oancea
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (remote client application) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
EP1955281A2
CLAIM 1
. A system for communicating transaction information , comprising : a plurality of client application devices distributed among one or more local client application devices and one or more remote client application (second server) devices ;
and a plurality of gateways distributed among one or more local gateways and one or more remote gateways , wherein the one or more local gateways are configured to communicate the transaction information with the one or more local client application devices , with which the one or more local gateways are associated , using one or more local data formats , wherein the one or more remote gateways are configured to communicate the transaction information with the one or more remote client application devices , with which the one or more remote gateways are associated , using one or more remote data formats , wherein the one or more local gateways are configured to transform the transaction information in the one or more local data formats into one or more common data formats that are shared with the one or more remote gateways , wherein the one or more remote gateways are configured to transform the transaction information in the one or more common data formats into the one or more remote data formats , and wherein the transaction information from the one or more local client application devices is communicated to the one or more remote client application devices for completing a transaction .
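
The reference claim describes local gateways transforming local data formats into common data formats shared with remote gateways, which then transform them into remote data formats. The sketch below illustrates that two-hop transformation with invented record fields, JSON standing in for the common format and XML for the remote format:

# Hypothetical two-hop format transformation in the style of EP1955281A2 claim 1.
import json
import xml.etree.ElementTree as ET

def local_to_common(local_record: dict) -> str:
    """Local gateway: proprietary record -> common JSON document."""
    return json.dumps({"txn": {"id": local_record["order_no"],
                               "amount": local_record["amt"]}})

def common_to_remote(common_doc: str) -> str:
    """Remote gateway: common JSON document -> remote XML format."""
    txn = json.loads(common_doc)["txn"]
    root = ET.Element("Transaction", id=str(txn["id"]))
    ET.SubElement(root, "Amount").text = str(txn["amount"])
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    common = local_to_common({"order_no": 1001, "amt": 24.5})
    print(common_to_remote(common))   # <Transaction id="1001"><Amount>24.5</Amount></Transaction>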

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command (XML documents) from the datacenter queue , deleting the message from the datacenter queue .
EP1955281A2
CLAIM 24
. The system of claim 23 , wherein standardized XML documents (delete command) generated from information contained in two or more gateways are combined into one standardized XML document that is used for non-repudiation of a business transaction .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (remote client application) .
EP1955281A2
CLAIM 1
. A system for communicating transaction information , comprising : a plurality of client application devices distributed among one or more local client application devices and one or more remote client application (second server) devices ;
and a plurality of gateways distributed among one or more local gateways and one or more remote gateways , wherein the one or more local gateways are configured to communicate the transaction information with the one or more local client application devices , with which the one or more local gateways are associated , using one or more local data formats , wherein the one or more remote gateways are configured to communicate the transaction information with the one or more remote client application devices , with which the one or more remote gateways are associated , using one or more remote data formats , wherein the one or more local gateways are configured to transform the transaction information in the one or more local data formats into one or more common data formats that are shared with the one or more remote gateways , wherein the one or more remote gateways are configured to transform the transaction information in the one or more common data formats into the one or more remote data formats , and wherein the transaction information from the one or more local client application devices is communicated to the one or more remote client application devices for completing a transaction .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (remote client application) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
EP1955281A2
CLAIM 1
. A system for communicating transaction information , comprising : a plurality of client application devices distributed among one or more local client application devices and one or more remote client application (second server) devices ;
and a plurality of gateways distributed among one or more local gateways and one or more remote gateways , wherein the one or more local gateways are configured to communicate the transaction information with the one or more local client application devices , with which the one or more local gateways are associated , using one or more local data formats , wherein the one or more remote gateways are configured to communicate the transaction information with the one or more remote client application devices , with which the one or more remote gateways are associated , using one or more remote data formats , wherein the one or more local gateways are configured to transform the transaction information in the one or more local data formats into one or more common data formats that are shared with the one or more remote gateways , wherein the one or more remote gateways are configured to transform the transaction information in the one or more common data formats into the one or more remote data formats , and wherein the transaction information from the one or more local client application devices is communicated to the one or more remote client application devices for completing a transaction .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command (XML documents) from the datacenter queue , delete the message from the first server .
EP1955281A2
CLAIM 24
. The system of claim 23 , wherein standardized XML documents (delete command) generated from information contained in two or more gateways are combined into one standardized XML document that is used for non-repudiation of a business transaction .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one (configured to store) or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
EP1955281A2
CLAIM 51
. The gateway of claim 28 , wherein the gateway is in communication with a data storage module , wherein the data storage module is configured to store (identify one) information transmitted and received by the gateway .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (remote client application) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
EP1955281A2
CLAIM 1
. A system for communicating transaction information , comprising : a plurality of client application devices distributed among one or more local client application devices and one or more remote client application (second server) devices ;
and a plurality of gateways distributed among one or more local gateways and one or more remote gateways , wherein the one or more local gateways are configured to communicate the transaction information with the one or more local client application devices , with which the one or more local gateways are associated , using one or more local data formats , wherein the one or more remote gateways are configured to communicate the transaction information with the one or more remote client application devices , with which the one or more remote gateways are associated , using one or more remote data formats , wherein the one or more local gateways are configured to transform the transaction information in the one or more local data formats into one or more common data formats that are shared with the one or more remote gateways , wherein the one or more remote gateways are configured to transform the transaction information in the one or more common data formats into the one or more remote data formats , and wherein the transaction information from the one or more local client application devices is communicated to the one or more remote client application devices for completing a transaction .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070180150A1

Filed: 2006-12-01     Issued: 2007-08-02

System and method for exchanging information among exchange applications

(Original Assignee) FireStar Software Inc     (Current Assignee) FireStar Software Inc

Mark Eisner, Gabriel Oancea
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (remote client application) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20070180150A1
CLAIM 1
. A system for communicating transaction information , comprising : a plurality of client application devices distributed among one or more local client application devices and one or more remote client application (second server) devices ;
and a plurality of gateways distributed among one or more local gateways and one or more remote gateways , wherein the one or more local gateways are configured to communicate the transaction information with the one or more local client application devices , with which the one or more local gateways are associated , using one or more local data formats , wherein the one or more remote gateways are configured to communicate the transaction information with the one or more remote client application devices , with which the one or more remote gateways are associated , using one or more remote data formats , wherein the one or more local gateways are configured to transform the transaction information in the one or more local data formats into one or more common data formats that are shared with the one or more remote gateways , wherein the one or more remote gateways are configured to transform the transaction information in the one or more common data formats into the one or more remote data formats , and wherein the transaction information from the one or more local client application devices is communicated to the one or more remote client application devices for completing a transaction .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command (XML documents) from the datacenter queue , deleting the message from the datacenter queue .
US20070180150A1
CLAIM 24
. The system of claim 23 , wherein standardized XML documents (delete command) generated from information contained in two or more gateways are combined into one standardized XML document that is used for non-repudiation of a business transaction .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (remote client application) .
US20070180150A1
CLAIM 1
. A system for communicating transaction information , comprising : a plurality of client application devices distributed among one or more local client application devices and one or more remote client application (second server) devices ;
and a plurality of gateways distributed among one or more local gateways and one or more remote gateways , wherein the one or more local gateways are configured to communicate the transaction information with the one or more local client application devices , with which the one or more local gateways are associated , using one or more local data formats , wherein the one or more remote gateways are configured to communicate the transaction information with the one or more remote client application devices , with which the one or more remote gateways are associated , using one or more remote data formats , wherein the one or more local gateways are configured to transform the transaction information in the one or more local data formats into one or more common data formats that are shared with the one or more remote gateways , wherein the one or more remote gateways are configured to transform the transaction information in the one or more common data formats into the one or more remote data formats , and wherein the transaction information from the one or more local client application devices is communicated to the one or more remote client application devices for completing a transaction .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (remote client application) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20070180150A1
CLAIM 1
. A system for communicating transaction information , comprising : a plurality of client application devices distributed among one or more local client application devices and one or more remote client application (second server) devices ;
and a plurality of gateways distributed among one or more local gateways and one or more remote gateways , wherein the one or more local gateways are configured to communicate the transaction information with the one or more local client application devices , with which the one or more local gateways are associated , using one or more local data formats , wherein the one or more remote gateways are configured to communicate the transaction information with the one or more remote client application devices , with which the one or more remote gateways are associated , using one or more remote data formats , wherein the one or more local gateways are configured to transform the transaction information in the one or more local data formats into one or more common data formats that are shared with the one or more remote gateways , wherein the one or more remote gateways are configured to transform the transaction information in the one or more common data formats into the one or more remote data formats , and wherein the transaction information from the one or more local client application devices is communicated to the one or more remote client application devices for completing a transaction .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command (XML documents) from the datacenter queue , delete the message from the first server .
US20070180150A1
CLAIM 24
. The system of claim 23 , wherein standardized XML documents (delete command) generated from information contained in two or more gateways are combined into one standardized XML document that is used for non-repudiation of a business transaction .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one (configured to store) or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20070180150A1
CLAIM 51
. The gateway of claim 28 , wherein the gateway is in communication with a data storage module , wherein the data storage module is configured to store (identify one) information transmitted and received by the gateway .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (remote client application) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20070180150A1
CLAIM 1
. A system for communicating transaction information , comprising : a plurality of client application devices distributed among one or more local client application devices and one or more remote client application (second server) devices ;
and a plurality of gateways distributed among one or more local gateways and one or more remote gateways , wherein the one or more local gateways are configured to communicate the transaction information with the one or more local client application devices , with which the one or more local gateways are associated , using one or more local data formats , wherein the one or more remote gateways are configured to communicate the transaction information with the one or more remote client application devices , with which the one or more remote gateways are associated , using one or more remote data formats , wherein the one or more local gateways are configured to transform the transaction information in the one or more local data formats into one or more common data formats that are shared with the one or more remote gateways , wherein the one or more remote gateways are configured to transform the transaction information in the one or more common data formats into the one or more remote data formats , and wherein the transaction information from the one or more local client application devices is communicated to the one or more remote client application devices for completing a transaction .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070168301A1

Filed: 2006-12-01     Issued: 2007-07-19

System and method for exchanging information among exchange applications

(Original Assignee) FireStar Software Inc     (Current Assignee) FireStar Software Inc

Mark Eisner, Gabriel Oancea
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker (including information) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .
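
The reference claim centers on a gateway message whose header carries a routing slip block: a template of a complex transaction expressed as simple transactions in a defined order, executed according to configuration data stored at the gateway. A minimal sketch, with step names and handlers invented for illustration:

# Hypothetical routing-slip processing in the style of US20070168301A1 claim 1.
from typing import Callable, Dict

def make_gateway(config: Dict[str, Callable[[dict], dict]]):
    """config: simple-transaction name -> handler (standing in for the configuration data)."""
    def handle(gateway_message: dict) -> dict:
        payload = gateway_message["payload"]
        for step in gateway_message["header"]["routing_slip"]:   # defined order
            handler = config.get(step)
            if handler is not None:        # only steps this gateway is configured to perform
                payload = handler(payload)
        return payload
    return handle

if __name__ == "__main__":
    gateway = make_gateway({
        "validate": lambda p: {**p, "valid": True},
        "enrich":   lambda p: {**p, "currency": "USD"},
    })
    msg = {"header": {"routing_slip": ["validate", "enrich", "settle"]},
           "payload": {"order": 1001}}
    print(gateway(msg))   # "settle" is skipped: not in this gateway's configuration data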

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker (including information) associated with the message request and the datacenter queue associated with the message request .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker on a first virtual machine ;

and executing the consumer worker (including information) on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker (including information) at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker (including information) executing a virtual machine ;

and identify one (configured to store) or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US20070168301A1
CLAIM 26
. A gateway for performing message-based business processes among a plurality of applications , comprising : a data store configured to store (identify one) configuration data , the configuration data including information defining one or more simple transactions that can be performed by the gateway ;
an abstract queue configured to receive a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and at least one processing unit configured to execute at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker (including information) information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker (including information) pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker (including information) pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker (including information) in response to the message request .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

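Claims 15 and 16 recite an intercept module that, for matched queues, intercepts the producer's message, stores it in a queue cache, and provides the intercepted message to the consumer in response to a message request. A minimal sketch under assumed names follows; forwarding to the remote datacenter queue and synchronization details are deliberately omitted.

```python
# Hypothetical intercept module keyed by the matched queues identified above.
from collections import defaultdict, deque

class InterceptModule:
    def __init__(self, matched_queues):
        self.matched_queues = set(matched_queues)   # matched queue information
        self.queue_cache = defaultdict(deque)       # queue_id -> locally held messages

    def on_send(self, queue_id, message):
        """Intercept a producer's message to a matched queue; otherwise pass through."""
        if queue_id in self.matched_queues:
            self.queue_cache[queue_id].append(message)
            return True                             # handled locally
        return False                                # forward to the remote datacenter queue

    def on_request(self, queue_id):
        """Serve a consumer's message request from the queue cache when possible."""
        if queue_id in self.matched_queues and self.queue_cache[queue_id]:
            return self.queue_cache[queue_id].popleft()
        return None                                 # fall back to the remote queue

# Example round trip on a matched queue
im = InterceptModule(matched_queues={"queue-7"})
im.on_send("queue-7", {"task": "resize-image"})
print(im.on_request("queue-7"))                     # {'task': 'resize-image'}
```
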
US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker (including information) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

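Claims 3, 8, and 17 further recite receiving a signal from a command channel associated with the datacenter queue, modifying the locally stored message in response, and deleting it on a delete command. The sketch below illustrates one such handler; the signal format and field names are assumptions for illustration.

```python
# Hypothetical command-channel handler acting on locally cached messages.
class CachedMessage:
    def __init__(self, body):
        self.body = body
        self.visible = True

def handle_command(queue_cache, signal):
    """Apply a command-channel signal to the locally stored copy of a message."""
    msg = queue_cache.get(signal["message_id"])
    if msg is None:
        return
    if signal["command"] == "modify":        # e.g. mark the message invisible while in flight
        msg.visible = signal.get("visible", msg.visible)
    elif signal["command"] == "delete":      # delete command from the datacenter queue
        del queue_cache[signal["message_id"]]

# Example: modify, then delete, a cached message
cache = {"m1": CachedMessage({"task": "transcode"})}
handle_command(cache, {"message_id": "m1", "command": "modify", "visible": False})
handle_command(cache, {"message_id": "m1", "command": "delete"})
print(cache)    # {}
```
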
US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker (including information) information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker (including information) pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker (including information) associated with the message request and the datacenter queue associated with the message request .
US20070168301A1
CLAIM 1
. A method for performing message-based business processes among a plurality of applications , comprising : storing configuration data in a data store in a gateway , the configuration data including information (consumer worker) defining one or more simple transactions that can be performed by the gateway ;
receiving a gateway message at the gateway , the gateway message including a gateway message header and a payload , the gateway message header including a routing slip block providing a template of a complex transaction in which the gateway message is participating , the complex transaction comprising one or more simple transactions performed in a defined order ;
and executing at the gateway at least one simple transaction in accordance with the template in the routing slip and the configuration data defining the one or more simple transactions .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20080075015A1

Filed: 2006-09-22     Issued: 2008-03-27

Method for time-stamping messages

(Original Assignee) Nokia Oyj     (Current Assignee) Provenance Asset Group LLC ; Nokia USA Inc

Ossi Lindvall, Tomi Junnila
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20080075015A1
CLAIM 11
. The system of claim 9 , wherein the application specific integrated circuit includes at least one processor (producer worker) or direct memory access controller in communication with the trace interface device via an operating system monitor .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command (memory access controller) from the datacenter queue , deleting the message from the datacenter queue .
US20080075015A1
CLAIM 11
. The system of claim 9 , wherein the application specific integrated circuit includes at least one processor or direct memory access controller (delete command) in communication with the trace interface device via an operating system monitor .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker (one processor) prior to storing the message in the queue cache at the second server .
US20080075015A1
CLAIM 11
. The system of claim 9 , wherein the application specific integrated circuit includes at least one processor (producer worker) or direct memory access controller in communication with the trace interface device via an operating system monitor .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker (one processor) on a first virtual machine ;

and executing the consumer worker on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20080075015A1
CLAIM 11
. The system of claim 9 , wherein the application specific integrated circuit includes at least one processor (producer worker) or direct memory access controller in communication with the trace interface device via an operating system monitor .

US9479472B2
CLAIM 7
. A computing device (computing device) to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20080075015A1
CLAIM 6
. The method of claim 1 , wherein the calculating step is conducted in at least one of a trace interface module positioned on an application specific integrated circuit , an external trace device in communication with the application specific integrated circuit , or in a computing device (computing device) external to the application specific integrated circuit and being in communication with the external trace device and the trace interface module .

US20080075015A1
CLAIM 11
. The system of claim 9 , wherein the application specific integrated circuit includes at least one processor (producer worker) or direct memory access controller in communication with the trace interface device via an operating system monitor .

US9479472B2
CLAIM 8
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command (memory access controller) from the datacenter queue , delete the message from the first server .
US20080075015A1
CLAIM 6
. The method of claim 1 , wherein the calculating step is conducted in at least one of a trace interface module positioned on an application specific integrated circuit , an external trace device in communication with the application specific integrated circuit , or in a computing device (computing device) external to the application specific integrated circuit and being in communication with the external trace device and the trace interface module .

US20080075015A1
CLAIM 11
. The system of claim 9 , wherein the application specific integrated circuit includes at least one processor or direct memory access controller (delete command) in communication with the trace interface device via an operating system monitor .

US9479472B2
CLAIM 9
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20080075015A1
CLAIM 6
. The method of claim 1 , wherein the calculating step is conducted in at least one of a trace interface module positioned on an application specific integrated circuit , an external trace device in communication with the application specific integrated circuit , or in a computing device (computing device) external to the application specific integrated circuit and being in communication with the external trace device and the trace interface module .

US9479472B2
CLAIM 10
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker (one processor) associated with the message .
US20080075015A1
CLAIM 6
. The method of claim 1 , wherein the calculating step is conducted in at least one of a trace interface module positioned on an application specific integrated circuit , an external trace device in communication with the application specific integrated circuit , or in a computing device (computing device) external to the application specific integrated circuit and being in communication with the external trace device and the trace interface module .

US20080075015A1
CLAIM 11
. The system of claim 9 , wherein the application specific integrated circuit includes at least one processor (producer worker) or direct memory access controller in communication with the trace interface device via an operating system monitor .

US9479472B2
CLAIM 11
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue associated with the message .
US20080075015A1
CLAIM 6
. The method of claim 1 , wherein the calculating step is conducted in at least one of a trace interface module positioned on an application specific integrated circuit , an external trace device in communication with the application specific integrated circuit , or in a computing device (computing device) external to the application specific integrated circuit and being in communication with the external trace device and the trace interface module .

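Claims 10 and 11 recite observing network traffic through a network connection to identify the producer worker and the datacenter queue associated with a message. The sketch below shows one way such observations could populate the queue user table; the flow record fields and the "/queues/" path convention are assumptions, not drawn from the patent or the reference.

```python
# Hypothetical traffic-observation hook feeding the queue user table.
def observe_flow(flow, queue_user_table):
    """flow: {'src_worker', 'path', 'verb'} - an assumed observable record."""
    if "/queues/" not in flow["path"]:
        return                                       # not datacenter-queue traffic
    queue_id = flow["path"].split("/queues/")[1]
    entry = queue_user_table.setdefault(queue_id, {"producers": set(), "consumers": set()})
    if flow["verb"] == "SEND":
        entry["producers"].add(flow["src_worker"])   # producer worker identified (claim 10)
    elif flow["verb"] == "RECEIVE":
        entry["consumers"].add(flow["src_worker"])   # queue tied to the request (claim 11)

# Example observation
table = {}
observe_flow({"src_worker": "worker-A", "path": "/queues/queue-7", "verb": "SEND"}, table)
print(table)
```
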
US9479472B2
CLAIM 12
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (one processor) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20080075015A1
CLAIM 6
. The method of claim 1 , wherein the calculating step is conducted in at least one of a trace interface module positioned on an application specific integrated circuit , an external trace device in communication with the application specific integrated circuit , or in a computing device (computing device) external to the application specific integrated circuit and being in communication with the external trace device and the trace interface module .

US20080075015A1
CLAIM 11
. The system of claim 9 , wherein the application specific integrated circuit includes at least one processor (producer worker) or direct memory access controller in communication with the trace interface device via an operating system monitor .

US9479472B2
CLAIM 13
. The computing device (computing device) of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information .
US20080075015A1
CLAIM 6
. The method of claim 1 , wherein the calculating step is conducted in at least one of a trace interface module positioned on an application specific integrated circuit , an external trace device in communication with the application specific integrated circuit , or in a computing device (computing device) external to the application specific integrated circuit and being in communication with the external trace device and the trace interface module .

US9479472B2
CLAIM 14
. The computing device (computing device) of claim 12 , wherein the VMM application is further configured to : determine matching producer worker (one processor) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20080075015A1
CLAIM 6
. The method of claim 1 , wherein the calculating step is conducted in at least one of a trace interface module positioned on an application specific integrated circuit , an external trace device in communication with the application specific integrated circuit , or in a computing device (computing device) external to the application specific integrated circuit and being in communication with the external trace device and the trace interface module .

US20080075015A1
CLAIM 11
. The system of claim 9 , wherein the application specific integrated circuit includes at least one processor (producer worker) or direct memory access controller in communication with the trace interface device via an operating system monitor .

US9479472B2
CLAIM 15
. The computing device (computing device) of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker (one processor) and the consumer worker .
US20080075015A1
CLAIM 6
. The method of claim 1 , wherein the calculating step is conducted in at least one of a trace interface module positioned on an application specific integrated circuit , an external trace device in communication with the application specific integrated circuit , or in a computing device (computing device) external to the application specific integrated circuit and being in communication with the external trace device and the trace interface module .

US20080075015A1
CLAIM 11
. The system of claim 9 , wherein the application specific integrated circuit includes at least one processor (producer worker) or direct memory access controller in communication with the trace interface device via an operating system monitor .

US9479472B2
CLAIM 16
. The computing device (computing device) of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker (one processor) ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
US20080075015A1
CLAIM 6
. The method of claim 1 , wherein the calculating step is conducted in at least one of a trace interface module positioned on an application specific integrated circuit , an external trace device in communication with the application specific integrated circuit , or in a computing device (computing device) external to the application specific integrated circuit and being in communication with the external trace device and the trace interface module .

US20080075015A1
CLAIM 11
. The system of claim 9 , wherein the application specific integrated circuit includes at least one processor (producer worker) or direct memory access controller in communication with the trace interface device via an operating system monitor .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20080075015A1
CLAIM 11
. The system of claim 9 , wherein the application specific integrated circuit includes at least one processor (producer worker) or direct memory access controller in communication with the trace interface device via an operating system monitor .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (one processor) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20080075015A1
CLAIM 11
. The system of claim 9 , wherein the application specific integrated circuit includes at least one processor (producer worker) or direct memory access controller in communication with the trace interface device via an operating system monitor .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker (one processor) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20080075015A1
CLAIM 11
. The system of claim 9 , wherein the application specific integrated circuit includes at least one processor (producer worker) or direct memory access controller in communication with the trace interface device via an operating system monitor .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070123280A1

Filed: 2006-07-12     Issued: 2007-05-31

System and method for providing mobile device services using SMS communications

(Original Assignee) Mcgary Faith; Ian Bacon; Michael Bates; Christine Baumeister     (Current Assignee) Grape Technology Group Inc

Faith McGary, Ian Bacon, Michael Bates, Christine Baumeister
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (embedded code) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20070123280A1
CLAIM 1
. An enhanced services platform said platform comprising : an interface for receiving a communication from a user requesting a desired data ;
an automated response module for parsing said communication and retrieving said desired data , said enhanced services platform configured to arrange said desired data into a response message that is sent to said user , wherein said response message includes an embedded code (second server) corresponding to a link allowing said user to re-contact said enhanced services platform ;
and an operator assistance module configured to receive communications from said user initiated via said link to provide further assistance regarding said user's request .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (embedded code) .
US20070123280A1
CLAIM 1
. An enhanced services platform said platform comprising : an interface for receiving a communication from a user requesting a desired data ;
an automated response module for parsing said communication and retrieving said desired data , said enhanced services platform configured to arrange said desired data into a response message that is sent to said user , wherein said response message includes an embedded code (second server) corresponding to a link allowing said user to re-contact said enhanced services platform ;
and an operator assistance module configured to receive communications from said user initiated via said link to provide further assistance regarding said user's request .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (embedded code) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20070123280A1
CLAIM 1
. An enhanced services platform said platform comprising : an interface for receiving a communication from a user requesting a desired data ;
an automated response module for parsing said communication and retrieving said desired data , said enhanced services platform configured to arrange said desired data into a response message that is sent to said user , wherein said response message includes an embedded code (second server) corresponding to a link allowing said user to re-contact said enhanced services platform ;
and an operator assistance module configured to receive communications from said user initiated via said link to provide further assistance regarding said user's request .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one (configured to store) or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20070123280A1
CLAIM 11
. The enhanced services platform as claimed in claim 1 , further comprising a derivative user identifier database module , configured to store (identify one) information about frequent users of said enhanced services platform in a user profile .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (embedded code) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20070123280A1
CLAIM 1
. An enhanced services platform said platform comprising : an interface for receiving a communication from a user requesting a desired data ;
an automated response module for parsing said communication and retrieving said desired data , said enhanced services platform configured to arrange said desired data into a response message that is sent to said user , wherein said response message includes an embedded code (second server) corresponding to a link allowing said user to re-contact said enhanced services platform ;
and an operator assistance module configured to receive communications from said user initiated via said link to provide further assistance regarding said user's request .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070005713A1

Filed: 2006-06-30     Issued: 2007-01-04

Secure electronic mail system

(Original Assignee) 0733660 BC Ltd (DBA E-MAIL2)     (Current Assignee) Appriver Canada Ulc

Thierry LeVasseur, Esteban Astudillo, Matt McLean, Derek Houg, Kung Chen, Jeremy Rasmussen
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker (including information) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20070005713A1
CLAIM 31
. The e-mail client plug-in of claim 30 , wherein the e-mail client plug-in is further configured to send a notification message to the recipient over a path that is does not include said secure e-mail service , said notification message including information (consumer worker) for retrieving the e-mail message from the secure e-mail service .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker (including information) associated with the message request and the datacenter queue associated with the message request .
US20070005713A1
CLAIM 31
. The e-mail client plug-in of claim 30 , wherein the e-mail client plug-in is further configured to send a notification message to the recipient over a path that is does not include said secure e-mail service , said notification message including information (consumer worker) for retrieving the e-mail message from the secure e-mail service .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker on a first virtual machine ;

and executing the consumer worker (including information) on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20070005713A1
CLAIM 31
. The e-mail client plug-in of claim 30 , wherein the e-mail client plug-in is further configured to send a notification message to the recipient over a path that is does not include said secure e-mail service , said notification message including information (consumer worker) for retrieving the e-mail message from the secure e-mail service .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker (including information) at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20070005713A1
CLAIM 31
. The e-mail client plug-in of claim 30 , wherein the e-mail client plug-in is further configured to send a notification message to the recipient over a path that is does not include said secure e-mail service , said notification message including information (consumer worker) for retrieving the e-mail message from the secure e-mail service .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker (including information) executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20070005713A1
CLAIM 31
. The e-mail client plug-in of claim 30 , wherein the e-mail client plug-in is further configured to send a notification message to the recipient over a path that is does not include said secure e-mail service , said notification message including information (consumer worker) for retrieving the e-mail message from the secure e-mail service .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker (including information) information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20070005713A1
CLAIM 31
. The e-mail client plug-in of claim 30 , wherein the e-mail client plug-in is further configured to send a notification message to the recipient over a path that is does not include said secure e-mail service , said notification message including information (consumer worker) for retrieving the e-mail message from the secure e-mail service .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer (encryption method) worker and consumer worker (including information) pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20070005713A1
CLAIM 11
. The method of claim 10 , wherein the server system implements multiple different secure e-mail services from which the sender can select , at least some of which use different e-mail encryption method (matching producer) s than others , and wherein the step of encrypting the e-mail message at the server system comprises using an encryption method that corresponds to the secure e-mail service selected by the sender .

US20070005713A1
CLAIM 31
. The e-mail client plug-in of claim 30 , wherein the e-mail client plug-in is further configured to send a notification message to the recipient over a path that is does not include said secure e-mail service , said notification message including information (consumer worker) for retrieving the e-mail message from the secure e-mail service .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer (encryption method) and consumer worker (including information) pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
US20070005713A1
CLAIM 11
. The method of claim 10 , wherein the server system implements multiple different secure e-mail services from which the sender can select , at least some of which use different e-mail encryption method (matching producer) s than others , and wherein the step of encrypting the e-mail message at the server system comprises using an encryption method that corresponds to the secure e-mail service selected by the sender .

US20070005713A1
CLAIM 31
. The e-mail client plug-in of claim 30 , wherein the e-mail client plug-in is further configured to send a notification message to the recipient over a path that is does not include said secure e-mail service , said notification message including information (consumer worker) for retrieving the e-mail message from the secure e-mail service .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker (including information) in response to the message request .
US20070005713A1
CLAIM 31
. The e-mail client plug-in of claim 30 , wherein the e-mail client plug-in is further configured to send a notification message to the recipient over a path that is does not include said secure e-mail service , said notification message including information (consumer worker) for retrieving the e-mail message from the secure e-mail service .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker (including information) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20070005713A1
CLAIM 31
. The e-mail client plug-in of claim 30 , wherein the e-mail client plug-in is further configured to send a notification message to the recipient over a path that is does not include said secure e-mail service , said notification message including information (consumer worker) for retrieving the e-mail message from the secure e-mail service .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker (including information) information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20070005713A1
CLAIM 31
. The e-mail client plug-in of claim 30 , wherein the e-mail client plug-in is further configured to send a notification message to the recipient over a path that is does not include said secure e-mail service , said notification message including information (consumer worker) for retrieving the e-mail message from the secure e-mail service .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer (encryption method) worker and consumer worker (including information) pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20070005713A1
CLAIM 11
. The method of claim 10 , wherein the server system implements multiple different secure e-mail services from which the sender can select , at least some of which use different e-mail encryption method (matching producer) s than others , and wherein the step of encrypting the e-mail message at the server system comprises using an encryption method that corresponds to the secure e-mail service selected by the sender .

US20070005713A1
CLAIM 31
. The e-mail client plug-in of claim 30 , wherein the e-mail client plug-in is further configured to send a notification message to the recipient over a path that is does not include said secure e-mail service , said notification message including information (consumer worker) for retrieving the e-mail message from the secure e-mail service .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker (including information) associated with the message request and the datacenter queue associated with the message request .
US20070005713A1
CLAIM 31
. The e-mail client plug-in of claim 30 , wherein the e-mail client plug-in is further configured to send a notification message to the recipient over a path that is does not include said secure e-mail service , said notification message including information (consumer worker) for retrieving the e-mail message from the secure e-mail service .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070113101A1

Filed: 2006-06-30     Issued: 2007-05-17

Secure electronic mail system with configurable cryptographic engine

(Original Assignee) 0733660 BC Ltd (DBA E-MAIL2)     (Current Assignee) Appriver Canada Ulc

Thierry LeVasseur, Esteban Astudillo, Matt McLean, Derek Houg, Kung Chen, Jeremy Rasmussen
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (client component) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20070113101A1
CLAIM 4
. The secure e-mail system of claim 3 , further comprising an e-mail client component (second server) that provides an option for a sender of an e-mail message to select from the plurality of secure e-mail services for sending the e-mail message .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (client component) .
US20070113101A1
CLAIM 4
. The secure e-mail system of claim 3 , further comprising an e-mail client component (second server) that provides an option for a sender of an e-mail message to select from the plurality of secure e-mail services for sending the e-mail message .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (client component) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20070113101A1
CLAIM 4
. The secure e-mail system of claim 3 , further comprising an e-mail client component (second server) that provides an option for a sender of an e-mail message to select from the plurality of secure e-mail services for sending the e-mail message .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one (configured to store) or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20070113101A1
CLAIM 1
. A secure e-mail system , comprising : a server system configured to store (identify one) e-mail messages in an encrypted form , and providing functionality for addressees of the e-mail messages to retrieve corresponding e-mail messages ;
a cryptographic engine that encrypts the e-mail messages for storage on the server system , and decrypts the e-mail messages for delivery to the addressees ;
and an interface that provides functionality for an administrator to add an executable cryptographic method to the cryptographic engine , and to designate a particular executable cryptographic method to be used to encrypt/decrypt e-mail messages .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (client component) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20070113101A1
CLAIM 4
. The secure e-mail system of claim 3 , further comprising an e-mail client component (second server) that provides an option for a sender of an e-mail message to select from the plurality of secure e-mail services for sending the e-mail message .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070288931A1

Filed: 2006-05-25     Issued: 2007-12-13

Multi processor and multi thread safe message queue with hardware assistance

(Original Assignee) PortalPlayer Inc     (Current Assignee) Nvidia Corp

Gokhan Avkarogullari
US9479472B2
CLAIM 1
. A method to locally process queue requests (exchanging messages) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20070288931A1
CLAIM 13
. A method for exchanging messages (queue requests) between a first software component running on a first computerized processor and a second software component running on a second computerized processor , wherein the first computerized processor and the second computerized processor have access to a shared memory , the method comprising : (a) attempting with the first software component to load a message queue write register with a message token that is a pointer to the message or that is the message itself ;
(b) determining whether there is space for the message token in a message queue in the shared memory , wherein said determining is triggered by said (a) having occurred and is performed atomically with respect to the software components ;
(c) if said (b) indicates that said space is available , updating said message queue , wherein said updating is also atomically with respect to the software components and includes : (1) incrementing a message counter ;
(2) writing said message token into said message queue at a location designated by a write pointer ;
and (3) changing said write pointer to point to a next location in said message queue ;
(d) attempting with the second software component to load said message token from a message queue read register ;
(e) determining whether said message token is new , thereby indicating whether there is at least one new message in the message queue , and wherein said determining is triggered by said (d) having occurred and is performed atomically with respect to the software components ;
(f) if said (e) indicates that the message is new , updating said message queue , wherein said updating is also atomically with respect to the software components and includes : (1) decrementing said message counter ;
(2) reading said message token from said message queue at a location designated by a read pointer ;
and (3) changing said read pointer to point to a next location in said message queue .

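Because claim 13 of US20070288931A1 is mapped repeatedly against the '472 queue-cache elements, a minimal sketch of the recited bounded queue may help: a write attempt checks for space, then atomically increments the message counter, writes the token at the write pointer, and advances that pointer; a read attempt mirrors those steps with the counter decremented and the read pointer advanced. A threading.Lock stands in for the reference's hardware-assisted atomic updates, and all names are illustrative.

```python
# Hypothetical bounded message queue in shared memory, per the steps of claim 13.
import threading

class SharedMessageQueue:
    def __init__(self, capacity=8):
        self.slots = [None] * capacity
        self.capacity = capacity
        self.count = 0                 # message counter
        self.write_ptr = 0
        self.read_ptr = 0
        self._lock = threading.Lock()  # models atomicity w.r.t. the software components

    def try_write(self, token):
        with self._lock:
            if self.count >= self.capacity:     # no space for the message token
                return False
            self.count += 1                     # (1) increment message counter
            self.slots[self.write_ptr] = token  # (2) write token at write pointer
            self.write_ptr = (self.write_ptr + 1) % self.capacity  # (3) advance pointer
            return True

    def try_read(self):
        with self._lock:
            if self.count == 0:                 # no new message token
                return None
            self.count -= 1                     # (1) decrement message counter
            token = self.slots[self.read_ptr]   # (2) read token at read pointer
            self.read_ptr = (self.read_ptr + 1) % self.capacity    # (3) advance pointer
            return token

# Example exchange between a writer and a reader
q = SharedMessageQueue(capacity=2)
q.try_write("msg-token-1")
print(q.try_read())    # msg-token-1
```
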
US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests (exchanging messages) from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20070288931A1
CLAIM 13
. A method for exchanging messages (queue requests) between a first software component running on a first computerized processor and a second software component running on a second computerized processor , wherein the first computerized processor and the second computerized processor have access to a shared memory , the method comprising : (a) attempting with the first software component to load a message queue write register with a message token that is a pointer to the message or that is the message itself ;
(b) determining whether there is space for the message token in a message queue in the shared memory , wherein said determining is triggered by said (a) having occurred and is performed atomically with respect to the software components ;
(c) if said (b) indicates that said space is available , updating said message queue , wherein said updating is also atomically with respect to the software components and includes : (1) incrementing a message counter ;
(2) writing said message token into said message queue at a location designated by a write pointer ;
and (3) changing said write pointer to point to a next location in said message queue ;
(d) attempting with the second software component to load said message token from a message queue read register ;
(e) determining whether said message token is new , thereby indicating whether there is at least one new message in the message queue , and wherein said determining is triggered by said (d) having occurred and is performed atomically with respect to the software components ;
(f) if said (e) indicates that the message is new , updating said message queue , wherein said updating is also atomically with respect to the software components and includes : (1) decrementing said message counter ;
(2) reading said message token from said message queue at a location designated by a read pointer ;
and (3) changing said read pointer to point to a next location in said message queue .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic (said determination) through a network connection to identify the producer worker associated with the message .
US20070288931A1
CLAIM 14
. A system for a first software component running on a first computerized processor to write a message to a shared memory that is accessible by a second software component running on a second computerized processor , comprising : load means for the first software component to attempt to load a message queue write register with a message token that is a pointer to the message or that is the message itself ;
a message queue management unit including : determination means for determining , atomically with respect to the software components , whether there is space for the message token in a message queue in the shared memory ;
and updating means responsive to said determination (network traffic) means for updating said message queue atomically with respect to the software components , wherein said updating means includes : means for incrementing a message counter ;
means for writing said message token into said message queue at a location designated by a write pointer ;
and means for changing said write pointer to point to a next location in said message queue .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic (said determination) through a network connection to detect the datacenter queue associated with the message .
US20070288931A1
CLAIM 14
. A system for a first software component running on a first computerized processor to write a message to a shared memory that is accessible by a second software component running on a second computerized processor , comprising : load means for the first software component to attempt to load a message queue write register with a message token that is a pointer to the message or that is the message itself ;
a message queue management unit including : determination means for determining , atomically with respect to the software components , whether there is space for the message token in a message queue in the shared memory ;
and updating means responsive to said determination (network traffic) means for updating said message queue atomically with respect to the software components , wherein said updating means includes : means for incrementing a message counter ;
means for writing said message token into said message queue at a location designated by a write pointer ;
and means for changing said write pointer to point to a next location in said message queue .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (queue management) between the producer worker and the consumer worker .
US20070288931A1
CLAIM 14
. A system for a first software component running on a first computerized processor to write a message to a shared memory that is accessible by a second software component running on a second computerized processor , comprising : load means for the first software component to attempt to load a message queue write register with a message token that is a pointer to the message or that is the message itself ;
a message queue management (second message) unit including : determination means for determining , atomically with respect to the software components , whether there is space for the message token in a message queue in the shared memory ;
and updating means responsive to said determination means for updating said message queue atomically with respect to the software components , wherein said updating means includes : means for incrementing a message counter ;
means for writing said message token into said message queue at a location designated by a write pointer ;
and means for changing said write pointer to point to a next location in said message queue .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests (exchanging messages) from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
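
Note: the '472 claim 17 steps recited above (detect the producer, intercept its message, cache it locally, detect the co-located consumer, serve the cached message, then react to a command-channel signal) fit a simple intercept loop. The sketch below is a hypothetical illustration under assumed names (LocalQueueCache and its methods); it is not drawn from either the patent or the reference.

class LocalQueueCache:
    """Illustrative local cache between co-located workers and a remote datacenter queue."""
    def __init__(self):
        self.cache = {}                        # queue name -> list of cached messages

    def intercept_send(self, queue_name, message):
        # Producer worker's send is intercepted and the message is stored locally.
        self.cache.setdefault(queue_name, []).append(message)

    def serve_request(self, queue_name):
        # Consumer worker's message request is answered from the local cache when possible.
        msgs = self.cache.get(queue_name, [])
        return msgs.pop(0) if msgs else None   # otherwise fall through to the remote queue

    def on_command_signal(self, queue_name, signal):
        # Signal from the command channel associated with the datacenter queue:
        # here we simply modify the state of any locally cached copies.
        for msg in self.cache.get(queue_name, []):
            msg["state"] = signal              # "modifying the message in response to receiving the signal"

cache = LocalQueueCache()
cache.intercept_send("jobs", {"body": "resize image 42", "state": "visible"})
print(cache.serve_request("jobs"))             # served locally, without a round trip to the second server
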
US20070288931A1
CLAIM 13
. A method for exchanging messages (queue requests) between a first software component running on a first computerized processor and a second software component running on a second computerized processor , wherein the first computerized processor and the second computerized processor have access to a shared memory , the method comprising : (a) attempting with the first software component to load a message queue write register with a message token that is a pointer to the message or that is the message itself ;
(b) determining whether there is space for the message token in a message queue in the shared memory , wherein said determining is triggered by said (a) having occurred and is performed atomically with respect to the software components ;
(c) if said (b) indicates that said space is available , updating said message queue , wherein said updating is also atomically with respect to the software components and includes : (1) incrementing a message counter ;
(2) writing said message token into said message queue at a location designated by a write pointer ;
and (3) changing said write pointer to point to a next location in said message queue ;
(d) attempting with the second software component to load said message token from a message queue read register ;
(e) determining whether said message token is new , thereby indicating whether there is at least one new message in the message queue , and wherein said determining is triggered by said (d) having occurred and is performed atomically with respect to the software components ;
(f) if said (e) indicates that the message is new , updating said message queue , wherein said updating is also atomically with respect to the software components and includes : (1) decrementing said message counter ;
(2) reading said message token from said message queue at a location designated by a read pointer ;
and (3) changing said read pointer to point to a next location in said message queue .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070174398A1

Filed: 2006-01-25     Issued: 2007-07-26

Systems and methods for communicating logic in e-mail messages

(Original Assignee) StrongMail Systems Inc     (Current Assignee) Selligent Inc

Frank Addante
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (web service) to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request (web service) and the datacenter queue associated with the message request .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions (database query) ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20070174398A1
CLAIM 13
. The method of claim 1 , wherein said logic for accessing the data comprises an SQL query statement that specifies the name of a database , a database account login , and a database query (store instructions) .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request (web service) sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table (central processing) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20070174398A1
CLAIM 41
. A computer system for distributing one or more encoded messages to one or more recipients , the computer system comprising : a central processing (queue user table) unit ;
and a memory , coupled to the central processing unit , the memory storing a request processing module and a message transfer agent module , wherein the message processing module comprises instructions for : (A) receiving an electronic request using an e-mail protocol , wherein said electronic request includes : instructions for accessing one or more destinations corresponding to said one or more recipients ;
and logic for accessing data relating to recipients in said one or more recipients ;
(B) obtaining said data relating to recipients in said one or more recipients using said logic for accessing said data ;
(C) formatting , for each respective recipient in said one or more recipients , a message body of a message corresponding to said respective recipient using said data relating to said respective recipient obtained in step (B) thereby constructing said one or more encoded messages ;
and (D) distributing said one or more encoded messages to said one or more recipients using said one or more corresponding destinations .
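
Note: '472 claim 12, charted above, recites constructing a queue user table from observed queue usage information. A minimal sketch of such a table, keyed by datacenter queue and listing observed producers and consumers, is given below; the record layout and function name are assumptions for illustration only.

from collections import defaultdict

def build_queue_user_table(observations):
    """observations: iterable of (worker_id, role, queue_name), role in {'producer', 'consumer'}."""
    table = defaultdict(lambda: {"producers": set(), "consumers": set()})
    for worker_id, role, queue_name in observations:
        table[queue_name][role + "s"].add(worker_id)
    return table

observed = [
    ("worker-a", "producer", "queue-1"),
    ("worker-b", "consumer", "queue-1"),
    ("worker-c", "consumer", "queue-2"),
]
table = build_queue_user_table(observed)
# Updating the table (claim 13) amounts to re-applying new observations to the same structure.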

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application is further configured to : update the queue user table (central processing) based on the observed queue usage information .
US20070174398A1
CLAIM 41
. A computer system for distributing one or more encoded messages to one or more recipients , the computer system comprising : a central processing (queue user table) unit ;
and a memory , coupled to the central processing unit , the memory storing a request processing module and a message transfer agent module , wherein the message processing module comprises instructions for : (A) receiving an electronic request using an e-mail protocol , wherein said electronic request includes : instructions for accessing one or more destinations corresponding to said one or more recipients ;
and logic for accessing data relating to recipients in said one or more recipients ;
(B) obtaining said data relating to recipients in said one or more recipients using said logic for accessing said data ;
(C) formatting , for each respective recipient in said one or more recipients , a message body of a message corresponding to said respective recipient using said data relating to said respective recipient obtained in step (B) thereby constructing said one or more encoded messages ;
and (D) distributing said one or more encoded messages to said one or more recipients using said one or more corresponding destinations .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table (central processing) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (web service) that includes matching the consumer worker to the other datacenter queue .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .

US20070174398A1
CLAIM 41
. A computer system for distributing one or more encoded messages to one or more recipients , the computer system comprising : a central processing (queue user table) unit ;
and a memory , coupled to the central processing unit , the memory storing a request processing module and a message transfer agent module , wherein the message processing module comprises instructions for : (A) receiving an electronic request using an e-mail protocol , wherein said electronic request includes : instructions for accessing one or more destinations corresponding to said one or more recipients ;
and logic for accessing data relating to recipients in said one or more recipients ;
(B) obtaining said data relating to recipients in said one or more recipients using said logic for accessing said data ;
(C) formatting , for each respective recipient in said one or more recipients , a message body of a message corresponding to said respective recipient using said data relating to said respective recipient obtained in step (B) thereby constructing said one or more encoded messages ;
and (D) distributing said one or more encoded messages to said one or more recipients using said one or more corresponding destinations .
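
Note: '472 claim 14, charted above, determines matching producer worker and consumer worker pairs through the queue user table, i.e., a producer and a consumer matched to the same datacenter queue. Continuing the hypothetical table layout from the earlier sketch:

def matching_pairs(table):
    """Yield (producer, consumer, queue) triples where both workers use the same queue."""
    for queue_name, users in table.items():
        for producer in users["producers"]:
            for consumer in users["consumers"]:
                yield (producer, consumer, queue_name)

# With the table built in the earlier sketch, this yields ('worker-a', 'worker-b', 'queue-1'),
# the matched pair whose traffic an intercept module could then handle locally (cf. claim 15).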

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request (web service) .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (web service) to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table (central processing) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20070174398A1
CLAIM 41
. A computer system for distributing one or more encoded messages to one or more recipients , the computer system comprising : a central processing (queue user table) unit ;
and a memory , coupled to the central processing unit , the memory storing a request processing module and a message transfer agent module , wherein the message processing module comprises instructions for : (A) receiving an electronic request using an e-mail protocol , wherein said electronic request includes : instructions for accessing one or more destinations corresponding to said one or more recipients ;
and logic for accessing data relating to recipients in said one or more recipients ;
(B) obtaining said data relating to recipients in said one or more recipients using said logic for accessing said data ;
(C) formatting , for each respective recipient in said one or more recipients , a message body of a message corresponding to said respective recipient using said data relating to said respective recipient obtained in step (B) thereby constructing said one or more encoded messages ;
and (D) distributing said one or more encoded messages to said one or more recipients using said one or more corresponding destinations .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table (central processing) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (web service) that includes matching the consumer worker to the other datacenter queue .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .

US20070174398A1
CLAIM 41
. A computer system for distributing one or more encoded messages to one or more recipients , the computer system comprising : a central processing (queue user table) unit ;
and a memory , coupled to the central processing unit , the memory storing a request processing module and a message transfer agent module , wherein the message processing module comprises instructions for : (A) receiving an electronic request using an e-mail protocol , wherein said electronic request includes : instructions for accessing one or more destinations corresponding to said one or more recipients ;
and logic for accessing data relating to recipients in said one or more recipients ;
(B) obtaining said data relating to recipients in said one or more recipients using said logic for accessing said data ;
(C) formatting , for each respective recipient in said one or more recipients , a message body of a message corresponding to said respective recipient using said data relating to said respective recipient obtained in step (B) thereby constructing said one or more encoded messages ;
and (D) distributing said one or more encoded messages to said one or more recipients using said one or more corresponding destinations .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request (web service) and the datacenter queue associated with the message request .
US20070174398A1
CLAIM 3
. The method of claim 2 , wherein the instructions for sending a request comprise an SQL query or web service (message request) request .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20060146991A1

Filed: 2006-01-05     Issued: 2006-07-06

Provisioning and management in a message publish/subscribe system

(Original Assignee) Tervela Inc     (Current Assignee) Tervela Inc

J. Thompson, Kul Singh, Pierre Fraval
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (data message) at a first server (external authentication) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data message (producer worker) s ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US20060146991A1
CLAIM 40
. A messaging system as in claim 1 , wherein one or more of the provisioning and management systems are integrated with an external authentication (first server) and entitlement system .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker (data message) prior to storing the message in the queue cache at the second server .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data message (producer worker) s ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker (data message) on a first virtual machine ;

and executing the consumer worker on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data message (producer worker) s ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US9479472B2
CLAIM 7
. A computing device (network bandwidth) to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker (data message) at a first server (external authentication) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data message (producer worker) s ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US20060146991A1
CLAIM 9
. A messaging system as in claim 1 , wherein the messaging system monitoring includes monitoring of performance metrics including network bandwidth (computing device) , message flow rates , frame rates , messaging hop latency , end-to-end latency , system behavior and protocol optimization services .

US20060146991A1
CLAIM 40
. A messaging system as in claim 1 , wherein one or more of the provisioning and management systems are integrated with an external authentication (first server) and entitlement system .

US9479472B2
CLAIM 8
. The computing device (network bandwidth) of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server (external authentication) .
US20060146991A1
CLAIM 9
. A messaging system as in claim 1 , wherein the messaging system monitoring includes monitoring of performance metrics including network bandwidth (computing device) , message flow rates , frame rates , messaging hop latency , end-to-end latency , system behavior and protocol optimization services .

US20060146991A1
CLAIM 40
. A messaging system as in claim 1 , wherein one or more of the provisioning and management systems are integrated with an external authentication (first server) and entitlement system .
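
Note: '472 claim 8, charted above, adds two reactions to command-channel traffic: modify the cached message when a signal is received, and delete it when the datacenter queue issues a delete command. A short handler is sketched below; the message and command field names are hypothetical.

def handle_command(cache, queue_name, command):
    """cache: queue name -> list of message dicts; command: dict received over the command channel."""
    messages = cache.get(queue_name, [])
    if command.get("type") == "delete":
        # delete command from the datacenter queue: drop the matching local copy
        cache[queue_name] = [m for m in messages if m.get("id") != command.get("id")]
    else:
        # any other signal: modify the local copy, e.g. update its visibility state
        for m in messages:
            if m.get("id") == command.get("id"):
                m["state"] = command.get("state", m.get("state"))

cache = {"jobs": [{"id": 7, "state": "visible"}]}
handle_command(cache, "jobs", {"type": "delete", "id": 7})
print(cache["jobs"])   # []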

US9479472B2
CLAIM 9
. The computing device (network bandwidth) of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20060146991A1
CLAIM 9
. A messaging system as in claim 1 , wherein the messaging system monitoring includes monitoring of performance metrics including network bandwidth (computing device) , message flow rates , frame rates , messaging hop latency , end-to-end latency , system behavior and protocol optimization services .

US9479472B2
CLAIM 10
. The computing device (network bandwidth) of claim 7 , wherein the VMM application is further configured to : observe network traffic (dynamic resource) through a network connection to identify the producer worker (data message) associated with the message .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data message (producer worker) s ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US20060146991A1
CLAIM 9
. A messaging system as in claim 1 , wherein the messaging system monitoring includes monitoring of performance metrics including network bandwidth (computing device) , message flow rates , frame rates , messaging hop latency , end-to-end latency , system behavior and protocol optimization services .

US20060146991A1
CLAIM 25
. A messaging system as in claim 5 , wherein the dynamic selection of transmission protocol and message routing path is based on system topology , health and performance reports from the respective provisioning and management system and it involves one or both of dynamic resource (network traffic) allocation and dynamic channel creation and/or selection .

US9479472B2
CLAIM 11
. The computing device (network bandwidth) of claim 7 , wherein the VMM application is further configured to : observe network traffic (dynamic resource) through a network connection to detect the datacenter queue associated with the message .
US20060146991A1
CLAIM 9
. A messaging system as in claim 1 , wherein the messaging system monitoring includes monitoring of performance metrics including network bandwidth (computing device) , message flow rates , frame rates , messaging hop latency , end-to-end latency , system behavior and protocol optimization services .

US20060146991A1
CLAIM 25
. A messaging system as in claim 5 , wherein the dynamic selection of transmission protocol and message routing path is based on system topology , health and performance reports from the respective provisioning and management system and it involves one or both of dynamic resource (network traffic) allocation and dynamic channel creation and/or selection .

US9479472B2
CLAIM 12
. The computing device (network bandwidth) of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (data message) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data message (producer worker) s ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US20060146991A1
CLAIM 9
. A messaging system as in claim 1 , wherein the messaging system monitoring includes monitoring of performance metrics including network bandwidth (computing device) , message flow rates , frame rates , messaging hop latency , end-to-end latency , system behavior and protocol optimization services .

US9479472B2
CLAIM 13
. The computing device (network bandwidth) of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information .
US20060146991A1
CLAIM 9
. A messaging system as in claim 1 , wherein the messaging system monitoring includes monitoring of performance metrics including network bandwidth (computing device) , message flow rates , frame rates , messaging hop latency , end-to-end latency , system behavior and protocol optimization services .

US9479472B2
CLAIM 14
. The computing device (network bandwidth) of claim 12 , wherein the VMM application is further configured to : determine matching producer worker (data message) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data message (producer worker) s ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US20060146991A1
CLAIM 9
. A messaging system as in claim 1 , wherein the messaging system monitoring includes monitoring of performance metrics including network bandwidth (computing device) , message flow rates , frame rates , messaging hop latency , end-to-end latency , system behavior and protocol optimization services .

US9479472B2
CLAIM 15
. The computing device (network bandwidth) of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker (data message) and the consumer worker .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data message (producer worker) s ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US20060146991A1
CLAIM 9
. A messaging system as in claim 1 , wherein the messaging system monitoring includes monitoring of performance metrics including network bandwidth (computing device) , message flow rates , frame rates , messaging hop latency , end-to-end latency , system behavior and protocol optimization services .

US9479472B2
CLAIM 16
. The computing device (network bandwidth) of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker (data message) ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data message (producer worker) s ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US20060146991A1
CLAIM 9
. A messaging system as in claim 1 , wherein the messaging system monitoring includes monitoring of performance metrics including network bandwidth (computing device) , message flow rates , frame rates , messaging hop latency , end-to-end latency , system behavior and protocol optimization services .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker (data message) at a first server (external authentication) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data message (producer worker) s ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US20060146991A1
CLAIM 40
. A messaging system as in claim 1 , wherein one or more of the provisioning and management systems are integrated with an external authentication (first server) and entitlement system .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (data message) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data message (producer worker) s ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker (data message) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20060146991A1
CLAIM 1
. A messaging system with provisioning and management , comprising : one or more than one messaging appliance operative for receiving and routing messages , including administrative and data message (producer worker) s ;
an interconnect ;
and one or more than one provisioning and management system linked to the one or more messaging appliances via the interconnect and operative to provide centralized , single-point management for the messaging system via communications of administrative messages , the single-point management including configuration management , messaging system monitoring and reporting .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070156834A1

Filed: 2005-12-29     Issued: 2007-07-05

Cursor component for messaging service

(Original Assignee) SAP SE     (Current Assignee) SAP SE

Radoslav Nikolov, Desislav Bantchovski, Stoyan Vellev
US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel (acknowledging receipt) associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20070156834A1
CLAIM 8
. The article of manufacture of claim 5 wherein said method further comprises flushing said highest priority message from said memory in response to said consumer acknowledging receipt (command channel) of said highest priority message .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions (said memory) ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel (acknowledging receipt) associated with the datacenter queue .
US20070156834A1
CLAIM 4
. The article of manufacture of claim 1 wherein said method further comprises , in response to a message arriving to said messaging service , said message being an only message needing delivery to said consumer at said message's priority level , replacing a NULL value in an entry of said table with a reference to said message in said memory (store instructions) , said entry reserved for said message's priority level .

US20070156834A1
CLAIM 8
. The article of manufacture of claim 5 wherein said method further comprises flushing said highest priority message from said memory in response to said consumer acknowledging receipt (command channel) of said highest priority message .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel (acknowledging receipt) associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20070156834A1
CLAIM 8
. The article of manufacture of claim 5 wherein said method further comprises flushing said highest priority message from said memory in response to said consumer acknowledging receipt (command channel) of said highest priority message .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20060168070A1

Filed: 2005-12-23     Issued: 2006-07-27

Hardware-based messaging appliance

(Original Assignee) Tervela Inc     (Current Assignee) Tervela Inc

J. Thompson, Kul Singh, Pierre Fraval
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (data message) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (incoming messages) to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (second message, message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one (second groups) or more of : the consumer worker associated with the message request (incoming messages) and the datacenter queue associated with the message request .
US20060168070A1
CLAIM 1
. A hardware-based messaging appliance in a publish/subscribe middleware system , comprising : an interconnect bus ;
and hardware modules interconnected via the interconnect bus , the hardware modules being divided into groups , a first one being a control plane module group for handling messaging appliance management functions , a second one being a data plane module group for handling message routing functions alone or in addition to message transformation functions , and a third one being a service plane module group for handling service functions utilized by the first and second groups (identifying one) of hardware modules .

US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (second message, message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker (data message) prior to storing the message in the queue cache at the second server .
US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker (data message) on a first virtual machine ;

and executing the consumer worker on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker (data message) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request (incoming messages) sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (second message, message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic (data plane) through a network connection (network connection) to identify the producer worker (data message) associated with the message .
US20060168070A1
CLAIM 1
. A hardware-based messaging appliance in a publish/subscribe middleware system , comprising : an interconnect bus ;
and hardware modules interconnected via the interconnect bus , the hardware modules being divided into groups , a first one being a control plane module group for handling messaging appliance management functions , a second one being a data plane (network traffic) module group for handling message routing functions alone or in addition to message transformation functions , and a third one being a service plane module group for handling service functions utilized by the first and second groups of hardware modules .

US20060168070A1
CLAIM 9
. A hardware-based messaging appliance as in claim 6 , wherein each logical configuration path is one of a plurality of paths , a first path being established via a command line interface (CLI) over a serial interface or a network connection (network connection) , and a second path being established by administrative messages routed through the publish/subscribe middleware system .

US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic (data plane) through a network connection (network connection) to detect the datacenter queue associated with the message .
US20060168070A1
CLAIM 1
. A hardware-based messaging appliance in a publish/subscribe middleware system , comprising : an interconnect bus ;
and hardware modules interconnected via the interconnect bus , the hardware modules being divided into groups , a first one being a control plane module group for handling messaging appliance management functions , a second one being a data plane (network traffic) module group for handling message routing functions alone or in addition to message transformation functions , and a third one being a service plane module group for handling service functions utilized by the first and second groups of hardware modules .

US20060168070A1
CLAIM 9
. A hardware-based messaging appliance as in claim 6 , wherein each logical configuration path is one of a plurality of paths , a first path being established via a command line interface (CLI) over a serial interface or a network connection (network connection) , and a second path being established by administrative messages routed through the publish/subscribe middleware system .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table (central processing) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (data message) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20060168070A1
CLAIM 7
. A hardware-based messaging appliance as in claim 6 , wherein the management module incorporates one or more central processing (queue user table) units (CPUs) in a computer , a blade server or a host server .

US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application is further configured to : update the queue user table (central processing) based on the observed queue usage information .
US20060168070A1
CLAIM 7
. A hardware-based messaging appliance as in claim 6 , wherein the management module incorporates one or more central processing (queue user table) units (CPUs) in a computer , a blade server or a host server .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker (data message) and consumer worker pairs through use of the queue user table (central processing) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (incoming messages) that includes matching the consumer worker to the other datacenter queue .
US20060168070A1
CLAIM 7
. A hardware-based messaging appliance as in claim 6 , wherein the management module incorporates one or more central processing (queue user table) units (CPUs) in a computer , a blade server or a host server .

US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (second message, message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (incoming messages) between the producer worker (data message) and the consumer worker .
US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (second message, message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker (data message) ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request (incoming messages) .
US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (second message, message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .
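
For illustration only, a minimal Python sketch of the intercept behaviour recited in US9479472B2 claim 16: a message sent by a producer worker to a matched queue is held in a local queue cache and served to a co-located consumer worker on request. All names are hypothetical, and forwarding and persistence details are not modelled.

from collections import defaultdict, deque

class InterceptModule:
    """Hypothetical intercept step: messages to a matched queue are held in a
    local queue cache and served to a co-located consumer on request."""

    def __init__(self, matched_queues):
        self.matched_queues = set(matched_queues)
        self.queue_cache = defaultdict(deque)  # queue_id -> locally stored messages

    def on_send(self, queue_id, message):
        if queue_id in self.matched_queues:
            self.queue_cache[queue_id].append(message)  # intercept and store locally
            return True   # handled locally
        return False      # otherwise forward to the remote datacenter queue

    def on_request(self, queue_id):
        if queue_id in self.matched_queues and self.queue_cache[queue_id]:
            return self.queue_cache[queue_id].popleft()  # serve the intercepted message
        return None  # fall back to the remote datacenter queue

im = InterceptModule(matched_queues={"queue-1"})
im.on_send("queue-1", {"body": "hello"})
print(im.on_request("queue-1"))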

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker (data message) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (incoming messages) to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (second message, message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .
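
For illustration only, a minimal Python sketch of the command-channel handling recited in US9479472B2 claims 2, 3, 8 and 17: a signal received from the command channel modifies a locally cached message, and a delete command removes it. The signal format shown is an assumption of the sketch.

class CommandChannelHandler:
    """Hypothetical handling of command-channel signals for locally cached messages."""

    def __init__(self, queue_cache):
        self.queue_cache = queue_cache  # queue_id -> list of message dicts

    def on_signal(self, queue_id, signal):
        messages = self.queue_cache.get(queue_id, [])
        if signal.get("command") == "delete":
            # Delete the referenced message from the local copy of the queue.
            self.queue_cache[queue_id] = [m for m in messages if m["id"] != signal["message_id"]]
        elif signal.get("command") == "modify":
            for m in messages:
                if m["id"] == signal["message_id"]:
                    m.update(signal["fields"])  # modify the message in response to the signal

cache = {"queue-1": [{"id": 1, "body": "hello"}]}
handler = CommandChannelHandler(cache)
handler.on_signal("queue-1", {"command": "modify", "message_id": 1, "fields": {"body": "hi"}})
handler.on_signal("queue-1", {"command": "delete", "message_id": 1})
print(cache)  # {'queue-1': []}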

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table (central processing) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (data message) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20060168070A1
CLAIM 7
. A hardware-based messaging appliance as in claim 6 , wherein the management module incorporates one or more central processing (queue user table) units (CPUs) in a computer , a blade server or a host server .

US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker (data message) and consumer worker pairs through use of the queue user table (central processing) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (incoming messages) that includes matching the consumer worker to the other datacenter queue .
US20060168070A1
CLAIM 7
. A hardware-based messaging appliance as in claim 6 , wherein the management module incorporates one or more central processing (queue user table) units (CPUs) in a computer , a blade server or a host server .

US20060168070A1
CLAIM 18
. A hardware-based messaging appliance as in claim 9 , further including physical interfaces one or more of which being dedicated for handling administrative message traffic associated with the messaging appliance management functions and the remaining physical interfaces are available for data message (producer worker) traffic , such that administrative message traffic is not commingled with and overloading the physical interfaces for data message traffic .

US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (second message, message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one (second groups) or more of : the consumer worker associated with the message request (incoming messages) and the datacenter queue associated with the message request .
US20060168070A1
CLAIM 1
. A hardware-based messaging appliance in a publish/subscribe middleware system , comprising : an interconnect bus ;
and hardware modules interconnected via the interconnect bus , the hardware modules being divided into groups , a first one being a control plane module group for handling messaging appliance management functions , a second one being a data plane module group for handling message routing functions alone or in addition to message transformation functions , and a third one being a service plane module group for handling service functions utilized by the first and second groups (identifying one) of hardware modules .

US20060168070A1
CLAIM 47
. A system as in claim 46 , wherein each edge messaging appliance includes a protocol transformation engine for transforming incoming messages (second message, message request) from an external protocol to a native protocol and for transforming routed messages from the native protocol to the external protocol .
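
For illustration only, a minimal Python sketch in the spirit of the protocol transformation engine of US20060168070A1 claim 47: an incoming message in an external wire format (JSON is an assumption of the sketch) is transformed into a native in-memory form and back.

import json

def to_native(external_bytes):
    """Hypothetical transformation of an incoming message from an external
    protocol into a native in-memory form."""
    payload = json.loads(external_bytes.decode("utf-8"))
    return {"topic": payload["topic"], "body": payload["data"]}

def to_external(native_msg):
    """Reverse transformation: native form back to the external protocol."""
    return json.dumps({"topic": native_msg["topic"], "data": native_msg["body"]}).encode("utf-8")

wire = b'{"topic": "orders", "data": "buy 100"}'
native = to_native(wire)
print(native, to_external(native))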




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20060146999A1

Filed: 2005-12-23     Issued: 2006-07-06

Caching engine in a messaging system

(Original Assignee) Tervela Inc     (Current Assignee) Tervela Inc

J. Thompson, Kul Singh, Pierre Fraval
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (complete message) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20060146999A1
CLAIM 36
. A method for providing quality of service with a caching engine as in claim 35 , wherein each data message has an associated topic , wherein the indexing service maintains a master image of each complete data message and , for a received data message that is a partially complete message (second server) , the indexing service compares the received data message against a most recent master image of a complete message with an associated topic similar to that of the partially-published message to determine how the master image should be updated .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (complete message) .
US20060146999A1
CLAIM 36
. A method for providing quality of service with a caching engine as in claim 35 , wherein each data message has an associated topic , wherein the indexing service maintains a master image of each complete data message and , for a received data message that is a partially complete message (second server) , the indexing service compares the received data message against a most recent master image of a complete message with an associated topic similar to that of the partially-published message to determine how the master image should be updated .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (complete message) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20060146999A1
CLAIM 36
. A method for providing quality of service with a caching engine as in claim 35 , wherein each data message has an associated topic , wherein the indexing service maintains a master image of each complete data message and , for a received data message that is a partially complete message (second server) , the indexing service compares the received data message against a most recent master image of a complete message with an associated topic similar to that of the partially-published message to determine how the master image should be updated .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (complete message) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20060146999A1
CLAIM 36
. A method for providing quality of service with a caching engine as in claim 35 , wherein each data message has an associated topic , wherein the indexing service maintains a master image of each complete data message and , for a received data message that is a partially complete message (second server) , the indexing service compares the received data message against a most recent master image of a complete message with an associated topic similar to that of the partially-published message to determine how the master image should be updated .
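
For illustration only, a minimal Python sketch of the master-image behaviour described in US20060146999A1 claim 36: a partially complete message for a topic is merged into the most recent complete master image for that topic. Reducing the comparison to a dictionary update is an assumption of the sketch.

class IndexingService:
    """Hypothetical sketch of a per-topic master image updated from partial messages."""

    def __init__(self):
        self.master_images = {}  # topic -> dict of message fields

    def on_message(self, topic, fields, complete=True):
        if complete or topic not in self.master_images:
            self.master_images[topic] = dict(fields)   # new complete master image
        else:
            self.master_images[topic].update(fields)   # merge the partial update
        return self.master_images[topic]

svc = IndexingService()
svc.on_message("quotes/ABC", {"bid": 10.0, "ask": 10.2})
print(svc.on_message("quotes/ABC", {"bid": 10.1}, complete=False))  # {'bid': 10.1, 'ask': 10.2}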




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US7624250B2

Filed: 2005-12-05     Issued: 2009-11-24

Heterogeneous multi-core processor having dedicated connections between processor cores

(Original Assignee) Intel Corp     (Current Assignee) Intel Corp

Sinn Wee Lau, Choon Yee Loh, Kar Meng Chan
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (same instruction) at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US7624250B2
CLAIM 4
. The processor of claim 1 , wherein each processor core in the first set of processor cores comprises a processor core having the same instruction (datacenter queue) set ;
and wherein the second set of at least one processor cores comprises a set of at least one processor cores having a different instruction set than processor cores in the first set of processor cores .

US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel associated with the datacenter queue (same instruction) ;

and modifying the message in response to receiving the signal .
US7624250B2
CLAIM 4
. The processor of claim 1 , wherein each processor core in the first set of processor cores comprises a processor core having the same instruction (datacenter queue) set ;
and wherein the second set of at least one processor cores comprises a set of at least one processor cores having a different instruction set than processor cores in the first set of processor cores .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command from the datacenter queue (same instruction) , deleting the message from the datacenter queue .
US7624250B2
CLAIM 4
. The processor of claim 1 , wherein each processor core in the first set of processor cores comprises a processor core having the same instruction (datacenter queue) set ;
and wherein the second set of at least one processor cores comprises a set of at least one processor cores having a different instruction set than processor cores in the first set of processor cores .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request and the datacenter queue (same instruction) associated with the message request .
US7624250B2
CLAIM 4
. The processor of claim 1 , wherein each processor core in the first set of processor cores comprises a processor core having the same instruction (datacenter queue) set ;
and wherein the second set of at least one processor cores comprises a set of at least one processor cores having a different instruction set than processor cores in the first set of processor cores .

US9479472B2
CLAIM 7
. A computing device to provide local processing (multiple register) of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (same instruction) at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US7624250B2
CLAIM 1
. A processor , comprising : multiple cores integrated on a single semiconductor die , the multiple cores including : a first set of processor cores integrated on the single semiconductor die having the same functional operationality ;
and a second set of at least one processor core integrated on the single semiconductor die having a different functional operationality than a processor core of the first set of processor cores ;
a chain of multiple dedicated unidirectional connections spanning the first and second set of processor cores , at least one of the multiple dedicated unidirectional connections being between a one of the first set of processor cores and a one of the second set of processor cores , the multiple dedicated unidirectional connections terminating in registers within the respective processor cores ;
wherein the registers in a one of the second set of processor cores comprises multiple registers (local processing) that a one of the first set of processor cores accesses as a circular ring queue ;
wherein the second set of processor cores includes at least one ring register ;
and wherein the ring register is updated when an operation is performed on the circular ring .

US7624250B2
CLAIM 4
. The processor of claim 1 , wherein each processor core in the first set of processor cores comprises a processor core having the same instruction (datacenter queue) set ;
and wherein the second set of at least one processor cores comprises a set of at least one processor cores having a different instruction set than processor cores in the first set of processor cores .
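
For illustration only, a minimal Python sketch of the register-backed circular ring queue described in US7624250B2 claim 1: one core pushes entries, another pops them, and the ring state (modelled here as head and tail indices) is updated on each operation. The class is hypothetical and the dedicated unidirectional connections are not modelled.

class RingQueue:
    """Hypothetical circular ring queue shared between two processor cores."""

    def __init__(self, size=4):
        self.slots = [None] * size
        self.head = 0   # next slot to read
        self.tail = 0   # next slot to write
        self.count = 0

    def push(self, value):
        if self.count == len(self.slots):
            return False                                    # ring full
        self.slots[self.tail] = value
        self.tail = (self.tail + 1) % len(self.slots)       # ring state updated on write
        self.count += 1
        return True

    def pop(self):
        if self.count == 0:
            return None
        value = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)       # ring state updated on read
        self.count -= 1
        return value

rq = RingQueue()
rq.push("task-0")
print(rq.pop())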

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue (same instruction) , delete the message from the first server .
US7624250B2
CLAIM 4
. The processor of claim 1 , wherein each processor core in the first set of processor cores comprises a processor core having the same instruction (datacenter queue) set ;
and wherein the second set of at least one processor cores comprises a set of at least one processor cores having a different instruction set than processor cores in the first set of processor cores .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue (same instruction) associated with the message request .
US7624250B2
CLAIM 4
. The processor of claim 1 , wherein each processor core in the first set of processor cores comprises a processor core having the same instruction (datacenter queue) set ;
and wherein the second set of at least one processor cores comprises a set of at least one processor cores having a different instruction set than processor cores in the first set of processor cores .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue (same instruction) associated with the message .
US7624250B2
CLAIM 4
. The processor of claim 1 , wherein each processor core in the first set of processor cores comprises a processor core having the same instruction (datacenter queue) set ;
and wherein the second set of at least one processor cores comprises a set of at least one processor cores having a different instruction set than processor cores in the first set of processor cores .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (same instruction) information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US7624250B2
CLAIM 4
. The processor of claim 1 , wherein each processor core in the first set of processor cores comprises a processor core having the same instruction (datacenter queue) set ;
and wherein the second set of at least one processor cores comprises a set of at least one processor cores having a different instruction set than processor cores in the first set of processor cores .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (same instruction) , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US7624250B2
CLAIM 4
. The processor of claim 1 , wherein each processor core in the first set of processor cores comprises a processor core having the same instruction (datacenter queue) set ;
and wherein the second set of at least one processor cores comprises a set of at least one processor cores having a different instruction set than processor cores in the first set of processor cores .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (same instruction) at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US7624250B2
CLAIM 4
. The processor of claim 1 , wherein each processor core in the first set of processor cores comprises a processor core having the same instruction (datacenter queue) set ;
and wherein the second set of at least one processor cores comprises a set of at least one processor cores having a different instruction set than processor cores in the first set of processor cores .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (same instruction) information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US7624250B2
CLAIM 4
. The processor of claim 1 , wherein each processor core in the first set of processor cores comprises a processor core having the same instruction (datacenter queue) set ;
and wherein the second set of at least one processor cores comprises a set of at least one processor cores having a different instruction set than processor cores in the first set of processor cores .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (same instruction) , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US7624250B2
CLAIM 4
. The processor of claim 1 , wherein each processor core in the first set of processor cores comprises a processor core having the same instruction (datacenter queue) set ;
and wherein the second set of at least one processor cores comprises a set of at least one processor cores having a different instruction set than processor cores in the first set of processor cores .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request and the datacenter queue (same instruction) associated with the message request .
US7624250B2
CLAIM 4
. The processor of claim 1 , wherein each processor core in the first set of processor cores comprises a processor core having the same instruction (datacenter queue) set ;
and wherein the second set of at least one processor cores comprises a set of at least one processor cores having a different instruction set than processor cores in the first set of processor cores .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070094664A1

Filed: 2005-10-21     Issued: 2007-04-26

Programmable priority for concurrent multi-threaded processors

(Original Assignee) Broadcom Corp     (Current Assignee) Avago Technologies General IP Singapore Pte Ltd

Kimming So, BaoBinh Truong, Yang Lu, Hon-Chong Ho, Li-Hung Chang, Chia-Cheng Choung, Jason Leonard
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server (second request) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache (cache line) at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20070094664A1
CLAIM 9
. The method of claim 1 wherein prioritizing the first processor in performing the first process relative to the second processor in performing the second process comprises : receiving , at a shared hardware resource , a first request from the first thread processor and a second request (first server) from the second processor ;
accessing the priority information in the control register ;
and providing access to the shared hardware resource to the first thread processor , based on the priority information .

US20070094664A1
CLAIM 11
. The method of claim 1 prioritizing the first processor in performing the first process relative to the second processor in performing the second process comprises : restricting the second processor to re-fill a cache line (queue cache) only in an assigned portion of a cache during the second process .
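
For illustration only, a minimal Python sketch of the priority arbitration described in US20070094664A1 claim 9: priority information held in a control register decides which thread processor's request is granted access to a shared hardware resource. The register encoding shown is an assumption of the sketch.

def arbitrate(control_register, request_a, request_b):
    """Hypothetical arbitration: grant the shared resource to the higher-priority request."""
    priority_a = control_register & 0xF          # low 4 bits: priority of processor A (assumed encoding)
    priority_b = (control_register >> 4) & 0xF   # next 4 bits: priority of processor B (assumed encoding)
    return request_a if priority_a >= priority_b else request_b

# Processor A given priority 3, processor B priority 1.
print(arbitrate(0x13, "A: fill cache line", "B: fill cache line"))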

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache (cache line) at the second server .
US20070094664A1
CLAIM 11
. The method of claim 1 prioritizing the first processor in performing the first process relative to the second processor in performing the second process comprises : restricting the second processor to re-fill a cache line (queue cache) only in an assigned portion of a cache during the second process .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions (main memory) ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server (second request) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache (cache line) at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20070094664A1
CLAIM 9
. The method of claim 1 wherein prioritizing the first processor in performing the first process relative to the second processor in performing the second process comprises : receiving , at a shared hardware resource , a first request from the first thread processor and a second request (first server) from the second processor ;
accessing the priority information in the control register ;
and providing access to the shared hardware resource to the first thread processor , based on the priority information .

US20070094664A1
CLAIM 11
. The method of claim 1 prioritizing the first processor in performing the first process relative to the second processor in performing the second process comprises : restricting the second processor to re-fill a cache line (queue cache) only in an assigned portion of a cache during the second process .

US20070094664A1
CLAIM 17
. The apparatus of claim 14 wherein the shared hardware resource includes one or more of a cache , a main memory (store instructions) , a buffer , a queue , an interconnect , an interface , a shared memory , a bus , a memory controller , or a shared device .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server (second request) .
US20070094664A1
CLAIM 9
. The method of claim 1 wherein prioritizing the first processor in performing the first process relative to the second processor in performing the second process comprises : receiving , at a shared hardware resource , a first request from the first thread processor and a second request (first server) from the second processor ;
accessing the priority information in the control register ;
and providing access to the shared hardware resource to the first thread processor , based on the priority information .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic (priority level) through a network connection to identify the producer worker associated with the message .
US20070094664A1
CLAIM 5
. The method of claim 1 wherein setting priority information in a control register comprises : setting a priority level (network traffic) in the control register indicating an extent to which the first thread processor is prioritized in executing the first process , relative to the second thread processor in executing the second process .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic (priority level) through a network connection to detect the datacenter queue associated with the message .
US20070094664A1
CLAIM 5
. The method of claim 1 wherein setting priority information in a control register comprises : setting a priority level (network traffic) in the control register indicating an extent to which the first thread processor is prioritized in executing the first process , relative to the second thread processor in executing the second process .
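
For illustration only, a minimal Python sketch of the traffic-observation step recited in US9479472B2 claims 10 and 11: observed packets are used to associate local workers with the datacenter queues they send to or receive from. The packet fields shown (src_worker, queue_endpoint, op) are assumptions of the sketch.

def observe_traffic(packets):
    """Hypothetical traffic observation: infer producer/queue and consumer/queue associations."""
    producers, consumers = {}, {}
    for pkt in packets:
        if pkt["op"] == "send":
            producers.setdefault(pkt["src_worker"], set()).add(pkt["queue_endpoint"])
        elif pkt["op"] == "receive":
            consumers.setdefault(pkt["src_worker"], set()).add(pkt["queue_endpoint"])
    return producers, consumers

packets = [
    {"src_worker": "vm-1/worker-A", "queue_endpoint": "queue-1", "op": "send"},
    {"src_worker": "vm-2/worker-B", "queue_endpoint": "queue-1", "op": "receive"},
]
print(observe_traffic(packets))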

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache (cache line) ;

and provide the intercepted message to the consumer worker in response to the message request .
US20070094664A1
CLAIM 11
. The method of claim 1 prioritizing the first processor in performing the first process relative to the second processor in performing the second process comprises : restricting the second processor to re-fill a cache line (queue cache) only in an assigned portion of a cache during the second process .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server (second request) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache (cache line) at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20070094664A1
CLAIM 9
. The method of claim 1 wherein prioritizing the first processor in performing the first process relative to the second processor in performing the second process comprises : receiving , at a shared hardware resource , a first request from the first thread processor and a second request (first server) from the second processor ;
accessing the priority information in the control register ;
and providing access to the shared hardware resource to the first thread processor , based on the priority information .

US20070094664A1
CLAIM 11
. The method of claim 1 prioritizing the first processor in performing the first process relative to the second processor in performing the second process comprises : restricting the second processor to re-fill a cache line (queue cache) only in an assigned portion of a cache during the second process .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20060031568A1

Filed: 2005-10-12     Issued: 2006-02-09

Adaptive flow control protocol

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC

Vadim Eydelman, Khawar Zuberi
US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one (following steps) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20060031568A1
CLAIM 3
. The system as recited in claim 2 , wherein the transport provider detect if the receiving application posts the receive buffer prior to posting the send by performing the following steps (identifying one) : determining if the receiving application posts a large receive buffer ;
determining if the sending application performs a send causing the receive posted by the receiving application to complete ;
and determining if the receiving application performs a small send .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one (following steps) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20060031568A1
CLAIM 3
. The system as recited in claim 2 , wherein the transport provider detect if the receiving application posts the receive buffer prior to posting the send by performing the following steps (identifying one) : determining if the receiving application posts a large receive buffer ;
determining if the sending application performs a send causing the receive posted by the receiving application to complete ;
and determining if the receiving application performs a small send .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070168567A1

Filed: 2005-08-31     Issued: 2007-07-19

System and method for file based I/O directly between an application instance and an I/O adapter

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

William Boyd, John Hufferd, Agustin Mena, Renato Recio, Madeline Vega
US9479472B2
CLAIM 1
. A method to locally process queue requests (system memory, I/O request) from co-located workers in a datacenter , the method comprising : detecting a producer worker (storage location) at a first server , wherein the producer worker sends a message to a datacenter queue (system memory, I/O request) at least partially stored at a second server ;

storing the message in a queue cache (system memory, I/O request) at the first server ;

detecting a consumer worker (storage location) at the first server , wherein the consumer worker sends a message request (start address) to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue requests, datacenter queue) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue requests, datacenter queue) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .
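
For illustration only, a minimal Python sketch of the processing queue entry described in US20070168567A1 claim 5, with start, end and head addresses and an unprocessed-entry count. The field names and the wrap-around behaviour shown are assumptions of the sketch.

from dataclasses import dataclass

@dataclass
class ProcessingQueueEntry:
    """Hypothetical layout of an I/O adapter processing queue entry."""
    start_address: int   # system memory address of the first entry in the application's queue
    end_address: int     # system memory address of the last entry
    head_address: int    # address of the next entry the I/O adapter will process
    count: int           # entries not yet processed by the I/O adapter

    def advance(self, entry_size):
        """Consume one entry, wrapping from the end back to the start (circular queue)."""
        self.head_address += entry_size
        if self.head_address > self.end_address:
            self.head_address = self.start_address
        self.count -= 1

pq = ProcessingQueueEntry(start_address=0x1000, end_address=0x10F0, head_address=0x1000, count=3)
pq.advance(entry_size=0x10)
print(hex(pq.head_address), pq.count)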

US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel associated with the datacenter queue (system memory, I/O request) ;

and modifying the message in response to receiving the signal .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue requests, datacenter queue) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address that identifies a system memory (queue cache, queue requests, datacenter queue) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command from the datacenter queue (system memory, I/O request) , deleting the message from the datacenter queue .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue requests, datacenter queue) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address that identifies a system memory (queue cache, queue requests, datacenter queue) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker (storage location) associated with the message request (start address) and the datacenter queue (system memory, I/O request) associated with the message request .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue requests, datacenter queue) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue requests, datacenter queue) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker (storage location) prior to storing the message in the queue cache (system memory, I/O request) at the second server .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue requests, datacenter queue) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address that identifies a system memory (queue cache, queue requests, datacenter queue) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker (storage location) on a first virtual machine ;

and executing the consumer worker (storage location) on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests (system memory, I/O request) from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker (storage location) at a first server , wherein the producer worker sends a message to a datacenter queue (system memory, I/O request) at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache (system memory, I/O request) at the first server ;

detect a consumer worker (storage location) at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue requests, datacenter queue) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address that identifies a system memory (queue cache, queue requests, datacenter queue) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue (system memory, I/O request) , delete the message from the first server .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue requests, datacenter queue) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address that identifies a system memory (queue cache, queue requests, datacenter queue) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request (start address) sent from the consumer worker (storage location) executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue (system memory, I/O request) associated with the message request .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue requests, datacenter queue) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue requests, datacenter queue) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker (storage location) associated with the message .
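Claims 10 and 11 recite observing network traffic to identify the producer worker and the datacenter queue; a minimal sketch under assumed packet fields follows.

```python
def observe_traffic(packets, queue_endpoints):
    # Scan observed packets for enqueue operations aimed at known datacenter
    # queue endpoints and record which local worker produced them.
    # Packet field names ("dst", "op", "src_worker") are assumptions.
    producers = {}
    for pkt in packets:
        if pkt["dst"] in queue_endpoints and pkt["op"] == "enqueue":
            producers[pkt["src_worker"]] = pkt["dst"]   # worker -> datacenter queue
    return producers
```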
US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue (system memory, I/O request) associated with the message .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue requests, datacenter queue) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address that identifies a system memory (queue cache, queue requests, datacenter queue) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (storage location) information , consumer worker (storage location) information , datacenter queue (system memory, I/O request) information associated with the producer worker , and datacenter queue information associated with the consumer worker .
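The queue user table of claim 12 can be illustrated by folding observed usage records into a per-queue table; the record fields below are assumptions made for the sketch.

```python
from collections import defaultdict

def build_queue_user_table(observations):
    # Each observation is assumed to carry the worker, its role, and the
    # datacenter queue it touched.
    table = defaultdict(lambda: {"producers": set(), "consumers": set()})
    for obs in observations:
        entry = table[obs["queue"]]
        if obs["role"] == "producer":
            entry["producers"].add(obs["worker"])
        else:
            entry["consumers"].add(obs["worker"])
    return table
```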
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue requests, datacenter queue) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address that identifies a system memory (queue cache, queue requests, datacenter queue) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker (storage location) and consumer worker (storage location) pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (system memory, I/O request) , and identify a message request (start address) that includes matching the consumer worker to the other datacenter queue .
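Claim 14's matching step can be sketched directly on top of such a queue user table: a producer and a consumer form a matched pair when both address the same datacenter queue. The sketch below assumes the table shape from the earlier example.

```python
def match_pairs(queue_user_table):
    # queue_user_table: queue -> {"producers": set, "consumers": set}
    pairs = []
    for queue, entry in queue_user_table.items():
        for producer in entry["producers"]:
            for consumer in entry["consumers"]:
                pairs.append((producer, consumer, queue))   # matched pair on this queue
    return pairs
```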
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue requests, datacenter queue) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue requests, datacenter queue) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker (storage location) pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker (storage location) and the consumer worker .
US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker (storage location) ;

store the message in the queue cache (system memory, I/O request) ;

and provide the intercepted message to the consumer worker (storage location) in response to the message request (start address) .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue requests, datacenter queue) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue requests, datacenter queue) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests (system memory, I/O request) from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker (storage location) at a first server , wherein the producer worker sends a message to a datacenter queue (system memory, I/O request) at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache (system memory, I/O request) at the first server ;

detecting a consumer worker (storage location) at the first server , wherein the consumer worker sends a message request (start address) to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
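Independent claim 17 recites the full local interception flow; the following is a minimal sketch of that flow on the first server, assuming hypothetical names for the intercept module and its methods.

```python
class InterceptModule:
    # Intercept a producer's message bound for a remote datacenter queue,
    # cache it locally, and serve it to a co-located consumer's message request.
    def __init__(self):
        self.queue_cache = {}   # datacenter queue id -> list of cached messages

    def intercept_send(self, queue_id, message):
        # Store the intercepted message in the queue cache at the first server.
        self.queue_cache.setdefault(queue_id, []).append(message)

    def serve_request(self, queue_id):
        # Provide a cached message in response to the consumer's message request,
        # or fall back to the remote datacenter queue (not shown) when empty.
        cached = self.queue_cache.get(queue_id, [])
        return cached.pop(0) if cached else None

    def on_command_channel_signal(self, queue_id, index, update):
        # Modify the cached message in response to a command channel signal.
        cached = self.queue_cache.get(queue_id, [])
        if index < len(cached):
            cached[index] = update(cached[index])
```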
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue requests, datacenter queue) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue requests, datacenter queue) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (storage location) information , consumer worker (storage location) information , datacenter queue (system memory, I/O request) information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue requests, datacenter queue) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address that identifies a system memory (queue cache, queue requests, datacenter queue) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker (storage location) and consumer worker (storage location) pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (system memory, I/O request) , and identify a message request (start address) that includes matching the consumer worker to the other datacenter queue .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue requests, datacenter queue) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue requests, datacenter queue) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker (storage location) associated with the message request (start address) and the datacenter queue (system memory, I/O request) associated with the message request .
US20070168567A1
CLAIM 1
. A computer program product comprising a computer usable medium having a computer readable program , wherein the computer readable program , when executed on an input/output (I/O) adapter , causes the I/O adapter to : receive , from a queue of an application instance , in response to a file name based I/O request (queue cache, queue requests, datacenter queue) generated by the application instance , the file name based I/O request including one or more file name based work requests , a doorbell message identifying a number of work requests to be added to a processing queue of the I/O adapter ;
generate a processing queue entry in the processing queue of the I/O adapter based on the number of work requests to be added to the processing queue identified in the doorbell message ;
and process at least one work request in the application instance's queue based on the processing queue entry in the processing queue of the I/O adapter .

US20070168567A1
CLAIM 5
. The computer program product of claim 1 , wherein the processing queue entry includes a start address (message request) that identifies a system memory (queue cache, queue requests, datacenter queue) address of a first entry in the application instance's queue , an end address that identifies a system memory address of a last entry in the application instance's queue , a head address that identifies a system memory address of a next entry in the application instance's queue that is to be processed by the I/O adapter , and a processing queue count that identifies a number of entries in the application instance's queue that have not been processed by the I/O adapter .

US20070168567A1
CLAIM 13
. The computer program product of claim 12 , wherein the computer readable program causes the I/O adapter to process the queue entry if the file referenced by the queue entry is associated with the application instance by : performing a lookup operation , in a storage block address table data structure , based on the file extension protection table entry , to identify at least one storage block address table entry corresponding to the file extension protection table entry ;
and performing an I/O operation on a storage location (producer worker, consumer worker, producer worker information, consumer worker pairs, determine matching producer worker, determining matching producer worker) in a storage device referenced by a storage block address that is included in the file extension protection table entry .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20070005572A1

Filed: 2005-06-29     Issued: 2007-01-04

Architecture and system for host management

(Original Assignee) Intel Corp     (Current Assignee) Intel Corp

Travis Schluessler, Priya Rajagopal, Ray Steinberger, Tisson Mathew, Arun Preetham, Ravi Sahita, David Durham, Karanvir Grewal
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (message request) to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message requesting (message request) the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .
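The first-buffer/second-buffer exchange recited in claim 6 of the '572 publication can be sketched as follows; the class name, buffer contents, and field names are assumptions for illustration only.

```python
from queue import Queue

class ProviderModule:
    def __init__(self):
        self.first_buffer = Queue()    # provider module -> managed host
        self.second_buffer = Queue()   # managed host -> provider module

    def retrieve_managed_resource_data(self, memory_location):
        # Store a first message requesting the data in the first buffer,
        # then retrieve the second message (with the data) from the second buffer.
        self.first_buffer.put({"request": memory_location})
        return self.second_buffer.get()   # blocks until the managed host replies
```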

US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel (second buffer) associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer (command channel) for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message requesting the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request (message request) and the datacenter queue associated with the message request .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message requesting (message request) the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel (second buffer) associated with the datacenter queue .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer (command channel) for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message requesting the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request (message request) sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message requesting (message request) the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (message request) that includes matching the consumer worker to the other datacenter queue .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message requesting (message request) the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (second message) between the producer worker and the consumer worker .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message requesting the managed resource data in the first buffer and to retrieve a second message (second message) including the managed resource data from the second buffer .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request (message request) .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message requesting (message request) the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (message request) to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel (second buffer) associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer (command channel) for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message requesting (message request) the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (message request) that includes matching the consumer worker to the other datacenter queue .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message requesting (message request) the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request (message request) and the datacenter queue associated with the message request .
US20070005572A1
CLAIM 6
. A medium according to claim 1 , the program code further comprising : code to provide a first buffer for messages from the provider module to the managed host ;
and code to provide a second buffer for messages from the managed host to the provider module , wherein the provider module is to retrieve the managed resource data from the memory location of the managed host by storing a first message requesting (message request) the managed resource data in the first buffer and to retrieve a second message including the managed resource data from the second buffer .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20060184948A1

Filed: 2005-02-17     Issued: 2006-08-17

System, method and medium for providing asynchronous input and output with less system calls to and from an operating system

(Original Assignee) Red Hat Inc     (Current Assignee) Red Hat Inc

Alan Cox
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20060184948A1
CLAIM 21
. A computing device using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .
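Claim 21 of the '948 publication describes batching application requests behind an indicia so that the kernel is notified less often; a minimal sketch under that reading, with a hypothetical notify callback, follows.

```python
class RequestList:
    def __init__(self, notify_kernel):
        self.requests = []
        self.has_pending = False          # the indicia: does the list contain a request?
        self.notify_kernel = notify_kernel

    def add(self, request):
        # Add a new application program request to the list; only issue a
        # system call when the list was previously empty.
        self.requests.append(request)
        if not self.has_pending:
            self.has_pending = True
            self.notify_kernel()          # one system call per batch of requests
```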

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker (one processor) prior to storing the message in the queue cache at the second server .
US20060184948A1
CLAIM 21
. A computing device using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker (one processor) on a first virtual machine ;

and executing the consumer worker on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20060184948A1
CLAIM 21
. A computing device using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US9479472B2
CLAIM 7
. A computing device (computing device) to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20060184948A1
CLAIM 21
. A computing device (computing device) using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US9479472B2
CLAIM 8
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server .
US20060184948A1
CLAIM 21
. A computing device (computing device) using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US9479472B2
CLAIM 9
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20060184948A1
CLAIM 21
. A computing device (computing device) using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US9479472B2
CLAIM 10
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker (one processor) associated with the message .
US20060184948A1
CLAIM 21
. A computing device (computing device) using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US9479472B2
CLAIM 11
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue associated with the message .
US20060184948A1
CLAIM 21
. A computing device (computing device) using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US9479472B2
CLAIM 12
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (one processor) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20060184948A1
CLAIM 21
. A computing device (computing device) using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US9479472B2
CLAIM 13
. The computing device (computing device) of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information .
US20060184948A1
CLAIM 21
. A computing device (computing device) using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US9479472B2
CLAIM 14
. The computing device (computing device) of claim 12 , wherein the VMM application is further configured to : determine matching producer worker (one processor) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20060184948A1
CLAIM 21
. A computing device (computing device) using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US9479472B2
CLAIM 15
. The computing device (computing device) of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker (one processor) and the consumer worker .
US20060184948A1
CLAIM 21
. A computing device (computing device) using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US9479472B2
CLAIM 16
. The computing device (computing device) of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker (one processor) ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
US20060184948A1
CLAIM 21
. A computing device (computing device) using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker (one processor) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20060184948A1
CLAIM 21
. A computing device using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .
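
Claim 17 recites the full local-processing loop (intercept the producer's message, cache it at the first server, serve the co-located consumer, and react to command-channel signals). A minimal sketch of that loop follows; the class and method names are hypothetical and are not drawn from the patent or the charted reference.

# Hypothetical sketch of the claim 17 mechanism at the first server.
from collections import defaultdict, deque

class LocalQueueProxy:
    def __init__(self):
        self.queue_cache = defaultdict(deque)        # queue name -> cached messages

    def on_producer_send(self, queue_name, message):
        # Intercept the message sent by the producer worker and store it in
        # the queue cache at the first server (forwarding to the remote
        # datacenter queue at the second server is omitted here).
        self.queue_cache[queue_name].append(message)

    def on_consumer_request(self, queue_name):
        # Provide a locally cached message to the co-located consumer worker.
        cache = self.queue_cache[queue_name]
        return cache.popleft() if cache else None

    def on_command_signal(self, queue_name, command, transform=None):
        # React to a signal on the command channel associated with the
        # datacenter queue: modify or delete the locally cached copies.
        if command == "modify" and transform is not None:
            self.queue_cache[queue_name] = deque(transform(m) for m in self.queue_cache[queue_name])
        elif command == "delete":
            self.queue_cache[queue_name].clear()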

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (one processor) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20060184948A1
CLAIM 21
. A computing device using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .
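
Claims 12, 13, and 18 turn observed queue usage into a queue user table. A minimal sketch of one possible table shape follows; the observation fields and helper name are assumptions made only for illustration.

# Hypothetical queue user table: for each observed datacenter queue, record the
# producer and consumer workers that have been seen using it.
def update_queue_user_table(table, observation):
    # observation: {"queue": ..., "worker": ..., "role": "producer" or "consumer"}
    entry = table.setdefault(observation["queue"], {"producer": set(), "consumer": set()})
    entry[observation["role"]].add(observation["worker"])
    return table

table = {}
update_queue_user_table(table, {"queue": "q1", "worker": "prod-1", "role": "producer"})
update_queue_user_table(table, {"queue": "q1", "worker": "cons-7", "role": "consumer"})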

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker (one processor) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20060184948A1
CLAIM 21
. A computing device using at least one software module for use in reducing the number of system calls from an application program to an operating system kernel , said computing device comprising : at least one memory area ;
and at least one processor (producer worker) that uses the at least one software module to (i) create a list of requests issued by an application program ;
(ii) associate an indicia with the list indicating whether the list contains a request ;
and (iii) add a new application program request to the list when the indicia indicates that the list includes a request .
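
Claims 14 and 19 derive matched producer/consumer pairs from that table by finding a producer and a consumer bound to the same datacenter queue. A minimal sketch under the same assumed table layout as above:

# Hypothetical matching step: a queue is "matched" when the table shows both a
# co-located producer sending to it and a co-located consumer requesting from it.
def matching_pairs(table):
    pairs = []
    for queue, entry in table.items():
        for producer in entry["producer"]:
            for consumer in entry["consumer"]:
                pairs.append((producer, consumer, queue))
    return pairs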




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20050071316A1

Filed: 2004-11-18     Issued: 2005-03-31

Method and apparatus for creating, sending, and using self-descriptive objects as messages over a message queuing network

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC

Ilan Caron
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker (one location) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .
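
US20050071316A1 claims 1 and 7 describe a self-descriptive object that carries its data items together with a count, a type-indicator location, and its own serialize/deserialize instructions. A loose sketch follows; the JSON encoding and every name in it are assumptions, since the reference does not dictate a wire format.

# Loose sketch of a self-descriptive object: data items, their count, a type-
# indicator location, and serialize/deserialize helpers carried with the object.
import json

class SelfDescriptiveObject:
    def __init__(self, items, type_indicator_location):
        self.items = list(items)                       # at least one data item
        self.count = len(self.items)                   # count of the data items
        self.type_indicator_location = type_indicator_location

    def serialize(self):                               # the "second instruction"
        return json.dumps({"items": self.items,
                           "count": self.count,
                           "type_loc": self.type_indicator_location})

    @staticmethod
    def deserialize(payload):                          # the "third instruction"
        fields = json.loads(payload)
        return SelfDescriptiveObject(fields["items"], fields["type_loc"])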

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one (second data) or more of : the consumer worker (one location) associated with the message request and the datacenter queue associated with the message request .
US20050071316A1
CLAIM 1
. One or more computer-readable media having stored thereon a data structure , comprising : a) a first data field containing at least one data item ;
b) a second data (identifying one) field containing data representing a location ;
c) a third data field containing data representing a count of the at least one data item ;
d) a fourth data field containing data representing at least one first instruction to manipulate the at least one data item ;
e) a fifth data field containing data representing at least one second instruction to serialize the data structure ;
and f) a sixth data field containing data representing at least one third instruction to deserialize the data structure .

US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker on a first virtual machine ;

and executing the consumer worker (one location) on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions (second instruction, third instruction) ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker (one location) at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20050071316A1
CLAIM 1
. One or more computer-readable media having stored thereon a data structure , comprising : a) a first data field containing at least one data item ;
b) a second data field containing data representing a location ;
c) a third data field containing data representing a count of the at least one data item ;
d) a fourth data field containing data representing at least one first instruction to manipulate the at least one data item ;
e) a fifth data field containing data representing at least one second instruction (store instructions) to serialize the data structure ;
and f) a sixth data field containing data representing at least one third instruction (store instructions) to deserialize the data structure .

US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker (one location) executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker (one location) information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker (one location) pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker (one location) pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker (one location) in response to the message request .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker (one location) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker (one location) information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker (one location) pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one (second data) or more of : the consumer worker (one location) associated with the message request and the datacenter queue associated with the message request .
US20050071316A1
CLAIM 1
. One or more computer-readable media having stored thereon a data structure , comprising : a) a first data field containing at least one data item ;
b) a second data (identifying one) field containing data representing a location ;
c) a third data field containing data representing a count of the at least one data item ;
d) a fourth data field containing data representing at least one first instruction to manipulate the at least one data item ;
e) a fifth data field containing data representing at least one second instruction to serialize the data structure ;
and f) a sixth data field containing data representing at least one third instruction to deserialize the data structure .

US20050071316A1
CLAIM 7
. One or more computer-readable media containing executable instructions that , when implemented , perform a method comprising : a) receiving a self-descriptive object comprising at least one data item , data representing at least one location (consumer worker) of a type indicator of the at least one data item , a count of the at least one data item , at least one first instruction to manipulate the at least one data item , at least one second instruction to serialize the self-descriptive object , and at least one third instruction to deserialize the self-descriptive object ;
b) invoking the at least one third instruction to deserialize the self-descriptive object ;
c) sending the deserialized self-descriptive object to a recipient ;
and d) the recipient sending a query to the at least one address of the at least one type identifier of the at least one data item in response to discovering an unknown type identifier .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20050091239A1

Filed: 2004-10-26     Issued: 2005-04-28

Queue bank repository and method for sharing limited queue banks in memory

(Original Assignee) Unisys Corp     (Current Assignee) Unisys Corp

Wayne Ward, David Johnson, Charles Caldarale
US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application (more available entry) is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20050091239A1
CLAIM 16
. The method of claim 15 wherein said operating system provides for more available entry (VMM application) address locations when it receives said interrupt .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application (more available entry) is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server .
US20050091239A1
CLAIM 16
. The method of claim 15 wherein said operating system provides for more available entry (VMM application) address locations when it receives said interrupt .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application (more available entry) is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20050091239A1
CLAIM 16
. The method of claim 15 wherein said operating system provides for more available entry (VMM application) address locations when it receives said interrupt .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application (more available entry) is further configured to : observe network traffic through a network connection to identify the producer worker associated with the message .
US20050091239A1
CLAIM 16
. The method of claim 15 wherein said operating system provides for more available entry (VMM application) address locations when it receives said interrupt .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application (more available entry) is further configured to : observe network traffic through a network connection to detect the datacenter queue associated with the message .
US20050091239A1
CLAIM 16
. The method of claim 15 wherein said operating system provides for more available entry (VMM application) address locations when it receives said interrupt .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application (more available entry) is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20050091239A1
CLAIM 16
. The method of claim 15 wherein said operating system provides for more available entry (VMM application) address locations when it receives said interrupt .

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application (more available entry) is further configured to : update the queue user table based on the observed queue usage information .
US20050091239A1
CLAIM 16
. The method of claim 15 wherein said operating system provides for more available entry (VMM application) address locations when it receives said interrupt .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application (more available entry) is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20050091239A1
CLAIM 16
. The method of claim 15 wherein said operating system provides for more available entry (VMM application) address locations when it receives said interrupt .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application (more available entry) is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
US20050091239A1
CLAIM 16
. The method of claim 15 wherein said operating system provides for more available entry (VMM application) address locations when it receives said interrupt .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application (more available entry) is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
US20050091239A1
CLAIM 16
. The method of claim 15 wherein said operating system provides for more available entry (VMM application) address locations when it receives said interrupt .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20060036697A1

Filed: 2004-08-16     Issued: 2006-02-16

Email system and method thereof

(Original Assignee) Taiwan Semiconductor Manufacturing Co TSMC Ltd     (Current Assignee) Taiwan Semiconductor Manufacturing Co TSMC Ltd

Jun-Liang Lin, Chien-Chung Huang, Dah-Chung Chen, Teng-Hsiang Hsu, Shih-Wei Chen, Chih-Yang Wang
US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table (predefined value) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20060036697A1
CLAIM 1
. An email system , comprising : a mail agent to receive at least one email message comprising at least a flag , assess the flag , and perform a predefined process including storing mail content of the email message to a database , generating a new email message comprising a link to the mail content of the email message in the database , and forwarding the new email message to a recipient of the email message if the flag is a predefined value (queue user table) .
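
The reference's claim 1 has the mail agent test a flag and, when it equals the predefined value, store the mail content in a database and forward a new message containing only a link. A minimal sketch follows; the flag value, link scheme, and database/send interfaces are hypothetical.

# Hypothetical sketch of the flag-driven handling in the reference's claim 1.
PREDEFINED_VALUE = "ARCHIVE"

def handle_email(message, database, send):
    # message: {"flag": ..., "content": ..., "recipient": ...}
    if message["flag"] == PREDEFINED_VALUE:
        key = database.store(message["content"])        # store mail content in the database
        link = "mail-archive://" + str(key)             # link to the stored content
        send({"recipient": message["recipient"], "body": link})
    else:
        send({"recipient": message["recipient"], "body": message["content"]})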

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application is further configured to : update the queue user table (predefined value) based on the observed queue usage information .
US20060036697A1
CLAIM 1
. An email system , comprising : a mail agent to receive at least one email message comprising at least a flag , assess the flag , and perform a predefined process including storing mail content of the email message to a database , generating a new email message comprising a link to the mail content of the email message in the database , and forwarding the new email message to a recipient of the email message if the flag is a predefined value (queue user table) .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table (predefined value) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20060036697A1
CLAIM 1
. An email system , comprising : a mail agent to receive at least one email message comprising at least a flag , assess the flag , and perform a predefined process including storing mail content of the email message to a database , generating a new email message comprising a link to the mail content of the email message in the database , and forwarding the new email message to a recipient of the email message if the flag is a predefined value (queue user table) .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table (predefined value) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20060036697A1
CLAIM 1
. An email system , comprising : a mail agent to receive at least one email message comprising at least a flag , assess the flag , and perform a predefined process including storing mail content of the email message to a database , generating a new email message comprising a link to the mail content of the email message in the database , and forwarding the new email message to a recipient of the email message if the flag is a predefined value (queue user table) .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table (predefined value) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20060036697A1
CLAIM 1
. An email system , comprising : a mail agent to receive at least one email message comprising at least a flag , assess the flag , and perform a predefined process including storing mail content of the email message to a database , generating a new email message comprising a link to the mail content of the email message in the database , and forwarding the new email message to a recipient of the email message if the flag is a predefined value (queue user table) .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN1508682A

Filed: 2003-12-16     Issued: 2004-06-30

Method, system, and device for task scheduling (任务调度的方法、系统和设备)

(Original Assignee) International Business Machines Corporation (国际商业机器公司)

A. Kundu (A·康杜)
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (一个队列) at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (这些请求) to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
CN1508682A
CLAIM 1
. A method for scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue (datacenter queue, datacenter queue information) , each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements to be provided by the network ; and b . determining the resources allocated to these requests (message request) within the network .

US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel associated with the datacenter queue (一个队列) ;

and modifying the message in response to receiving the signal .
CN1508682A
CLAIM 1
. A method for scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue (datacenter queue, datacenter queue information) , each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command from the datacenter queue (一个队列) , deleting the message from the datacenter queue .
CN1508682A
CLAIM 1
. A method for scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue (datacenter queue, datacenter queue information) , each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request (这些请求) and the datacenter queue (一个队列) associated with the message request .
CN1508682A
CLAIM 1
. A method for scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue (datacenter queue, datacenter queue information) , each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements to be provided by the network ; and b . determining the resources allocated to these requests (message request) within the network .

US9479472B2
CLAIM 7
. A computing device to provide local processing (计算一个) of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (一个队列) at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
CN1508682A
CLAIM 1
. A method for scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue (datacenter queue, datacenter queue information) , each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 6
. The method as recited in claim 1 , wherein the step of deriving a classification value comprises the steps of : a . computing a (local processing) utility value based on the exchanged traffic information ; b . if the computed utility value is less than a predefined value , assuming a compensation measure ; and c . recomputing the utility value after taking the assumed compensation measure into account .
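
Claim 6 of the reference derives the classification value by computing a utility value from the exchanged traffic information, assuming a compensation measure if that value falls below a predefined value, and then recomputing. A minimal sketch of that control flow; the utility and compensation functions are assumptions supplied by the caller.

# Hypothetical sketch of the reference's claim 6 control flow.
def classification_value(traffic_info, threshold, utility, compensate):
    value = utility(traffic_info)                  # a. compute a utility value
    if value < threshold:                          # b. below the predefined value:
        traffic_info = compensate(traffic_info)    #    assume a compensation measure
        value = utility(traffic_info)              # c. recompute the utility value
    return value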

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue (一个队列) , delete the message from the first server .
CN1508682A
CLAIM 1
. A method for scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue (datacenter queue, datacenter queue information) , each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request (这些请求) sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue (一个队列) associated with the message request .
CN1508682A
CLAIM 1
. A method for scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue (datacenter queue, datacenter queue information) , each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements to be provided by the network ; and b . determining the resources allocated to these requests (message request) within the network .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection (调度装置) to identify the producer worker associated with the message .
CN1508682A
CLAIM 11
. The system as recited in claim 10 , wherein the scheduling means (network connection) further comprises means for distributing the requests .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection (调度装置) to detect the datacenter queue (一个队列) associated with the message .
CN1508682A
CLAIM 1
. A method for scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue (datacenter queue, datacenter queue information) , each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 11
. The system as recited in claim 10 , wherein the scheduling means (network connection) further comprises means for distributing the requests .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (一个队列) information associated with the producer worker , and datacenter queue information associated with the consumer worker .
CN1508682A
CLAIM 1
. A method for scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue (datacenter queue, datacenter queue information) , each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (一个队列) , and identify a message request (这些请求) that includes matching the consumer worker to the other datacenter queue .
CN1508682A
CLAIM 1
. A method for scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue (datacenter queue, datacenter queue information) , each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements to be provided by the network ; and b . determining the resources allocated to these requests (message request) within the network .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request (这些请求) .
CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements to be provided by the network ; and b . determining the resources allocated to these requests (message request) within the network .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (一个队列) at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (这些请求) to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
CN1508682A
CLAIM 1
. A method for scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue (datacenter queue, datacenter queue information) , each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements to be provided by the network ; and b . determining the resources allocated to these requests (message request) within the network .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (一个队列) information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
CN1508682A
CLAIM 1
. A method for scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue (datacenter queue, datacenter queue information) , each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (一个队列) , and identify a message request (这些请求) that includes matching the consumer worker to the other datacenter queue .
CN1508682A
CLAIM 1
. A method for scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue (datacenter queue, datacenter queue information) , each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements to be provided by the network ; and b . determining the resources allocated to these requests (message request) within the network .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request (这些请求) and the datacenter queue (一个队列) associated with the message request .
CN1508682A
CLAIM 1
. A method for scheduling requests to be processed in a multi-level computing system , each level of the computing system having at least one queue (datacenter queue, datacenter queue information) , each queue having at least one processing function associated with it , the method comprising the steps of : a . buffering the requests in a queue of the first level ; b . exchanging traffic information with the other levels adjacent to the first level ; c . deriving a classification value for the requests ; and d . scheduling the requests according to the derived value .

CN1508682A
CLAIM 27
. The method as recited in claim 26 , wherein the step of determining a threshold utility value further comprises : a . identifying the QoS and SLA requirements to be provided by the network ; and b . determining the resources allocated to these requests (message request) within the network .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
JP2004199678A

Filed: 2003-12-12     Issued: 2004-07-15

Method, system, and program product for task scheduling (タスク・スケジューリングの方法、システム、およびプログラム製品)

(Original Assignee) International Business Machines Corporation <IBM>

Ashish Kundu
US9479472B2
CLAIM 1
. A method to locally process queue requests (の要求) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
JP2004199678A
CLAIM 8
The method according to claim 6 , wherein the step of assuming a corrective action includes the step of assuming an increase in the request (queue requests) processing capacity of the appropriate queue when the appropriate queue cannot accept a request .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests (の要求) from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
JP2004199678A
CLAIM 8
The method according to claim 6 , wherein the step of assuming a corrective action includes the step of assuming an increase in the request (queue requests) processing capacity of the appropriate queue when the appropriate queue cannot accept a request .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table (ヘッダ) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
JP2004199678A
CLAIM 28
The method according to claim 26 , wherein the step of obtaining the utility value comprises : a . identifying , from a request header (queue user table) , the QoS and SLA relating to the request ; b . determining , at a next-level balancer , the resources to be allocated for the request ; and c . processing the identified information and the determined information to obtain the utility value .

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application is further configured to : update the queue user table (ヘッダ) based on the observed queue usage information .
JP2004199678A
CLAIM 28
The method according to claim 26 , wherein the step of obtaining the utility value comprises : a . identifying , from a request header (queue user table) , the QoS and SLA relating to the request ; b . determining , at a next-level balancer , the resources to be allocated for the request ; and c . processing the identified information and the determined information to obtain the utility value .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table (ヘッダ) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
JP2004199678A
CLAIM 28
The method according to claim 26 , wherein the step of obtaining the utility value comprises : a . identifying , from a request header (queue user table) , the QoS and SLA relating to the request ; b . determining , at a next-level balancer , the resources to be allocated for the request ; and c . processing the identified information and the determined information to obtain the utility value .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests (の要求) from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
JP2004199678A
CLAIM 8
The method according to claim 6 , wherein the step of assuming a corrective action includes the step of assuming an increase in the request (queue requests) processing capacity of the appropriate queue when the appropriate queue cannot accept a request .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table (ヘッダ) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
JP2004199678A
CLAIM 28
The method according to claim 26 , wherein the step of obtaining the utility value comprises : a . identifying , from a request header (queue user table) , the QoS and SLA relating to the request ; b . determining , at a next-level balancer , the resources to be allocated for the request ; and c . processing the identified information and the determined information to obtain the utility value .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table (ヘッダ) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
JP2004199678A
CLAIM 28
The method according to claim 26 , wherein the step of obtaining the utility value comprises : a . identifying , from a request header (queue user table) , the QoS and SLA relating to the request ; b . determining , at a next-level balancer , the resources to be allocated for the request ; and c . processing the identified information and the determined information to obtain the utility value .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
EP1432188A1

Filed: 2003-11-21     Issued: 2004-06-23

Email client and email facsimile machine

(Original Assignee) Samsung Electronics Co Ltd     (Current Assignee) Samsung Electronics Co Ltd

Young-Hoon Kim
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (email client) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
EP1432188A1
CLAIM 1
An email client (second server) for receiving email messages comprising a plurality of headers , the client being characterised by processing means (40 , 42 , 50 , 52) configured for detecting a processing instruction header among a received email's headers and responding to detection of a processing instruction header by operating in dependence on the content of the processing instruction header .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command (delete command) from the datacenter queue , deleting the message from the datacenter queue .
EP1432188A1
CLAIM 5
A client according to claim 3 or 4 , wherein the processing means (40 , 42 , 50 , 52) is configured to respond to a predetermined processing instruction by sending a message delete command (delete command) to the server (70) from which the email was downloaded .
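
Claim 5 of the reference has the client answer a predetermined processing instruction by sending a message delete command back to the server the email was downloaded from. A minimal sketch; the header name, instruction value, and server interface are hypothetical.

# Hypothetical sketch: a predetermined processing-instruction header triggers a
# delete command to the originating server.
def process_instruction_headers(headers, server):
    if headers.get("X-Processing-Instruction") == "delete-on-server":
        server.send_delete_command(headers.get("Message-Id"))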

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (email client) .
EP1432188A1
CLAIM 1
An email client (second server) for receiving email messages comprising a plurality of headers , the client being characterised by processing means (40 , 42 , 50 , 52) configured for detecting a processing instruction header among a received email's headers and responding to detection of a processing instruction header by operating in dependence on the content of the processing instruction header .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (email client) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
EP1432188A1
CLAIM 1
An email client (second server) for receiving email messages comprising a plurality of headers , the client being characterised by processing means (40 , 42 , 50 , 52) configured for detecting a processing instruction header among a received email's headers and responding to detection of a processing instruction header by operating in dependence on the content of the processing instruction header .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command (delete command) from the datacenter queue , delete the message from the first server .
EP1432188A1
CLAIM 5
A client according to claim 3 or 4 , wherein the processing means (40 , 42 , 50 , 52) is configured to respond to a predetermined processing instruction by sending a message delete command (delete command) to the server (70) from which the email was downloaded .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (email messages) between the producer worker and the consumer worker .
EP1432188A1
CLAIM 1
An email client for receiving email messages (second message) comprising a plurality of headers , the client being characterised by processing means (40 , 42 , 50 , 52) configured for detecting a processing instruction header among a received email's headers and responding to detection of a processing instruction header by operating in dependence on the content of the processing instruction header .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (email client) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
EP1432188A1
CLAIM 1
An email client (second server) for receiving email messages comprising a plurality of headers , the client being characterised by processing means (40 , 42 , 50 , 52) configured for detecting a processing instruction header among a received email's headers and responding to detection of a processing instruction header by operating in dependence on the content of the processing instruction header .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US7337214B2

Filed: 2003-09-26     Issued: 2008-02-26

Caching, clustering and aggregating server

(Original Assignee) YHC Corp     (Current Assignee) YHC Corp

Michael Douglass, Douglas Swarin, Edward Henigin, Jonah Yokubaitis
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (second server) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker (storage units) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US7337214B2
CLAIM 2
. The server system of claim 1 , wherein a second server (second server) of the cluster of servers retrieves the first requested article from the at least one of the servers in the cluster of servers when the customer requested article has already been requested from the backend servers due to a previous customer request for the first requested article .

US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker, producer worker information, consumer worker information, consumer worker pairs) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker (storage units) associated with the message request and the datacenter queue associated with the message request .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker, producer worker information, consumer worker information, consumer worker pairs) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (second server) .
US7337214B2
CLAIM 2
. The server system of claim 1 , wherein a second server (second server) of the cluster of servers retrieves the first requested article from the at least one of the servers in the cluster of servers when the customer requested article has already been requested from the backend servers due to a previous customer request for the first requested article .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker on a first virtual machine ;

and executing the consumer worker (storage units) on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker, producer worker information, consumer worker information, consumer worker pairs) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (second server) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker (storage units) at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US7337214B2
CLAIM 2
. The server system of claim 1 , wherein a second server (second server) of the cluster of servers retrieves the first requested article from the at least one of the servers in the cluster of servers when the customer requested article has already been requested from the backend servers due to a previous customer request for the first requested article .

US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker, producer worker information, consumer worker information, consumer worker pairs) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker (storage units) executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker, producer worker information, consumer worker information, consumer worker pairs) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection (network connection) to identify the producer worker associated with the message .
US7337214B2
CLAIM 8
. The server system of claim 1 , wherein each the server in the cluster of servers is adapted to be in communication with the other servers in the cluster of servers via a network connection (network connection) .
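
As a rough illustration of the traffic-observation step of claims 10 and 11, a VMM-level component could inspect queue operations crossing the network connection and record which local worker addressed which remote queue. The event format and names below are assumptions made for this sketch, not details from US9479472B2 or US7337214B2.

    def observe_traffic(events):
        # Each event is a hypothetical observed queue operation, e.g.
        # {"worker": "producer-1", "op": "send", "queue": "orders"}.
        producers_by_queue, consumers_by_queue = {}, {}
        for event in events:
            target = producers_by_queue if event["op"] == "send" else consumers_by_queue
            target.setdefault(event["queue"], set()).add(event["worker"])
        return producers_by_queue, consumers_by_queue

    # Usage: two observed operations on the same remote datacenter queue.
    sends, receives = observe_traffic([
        {"worker": "producer-1", "op": "send", "queue": "orders"},
        {"worker": "consumer-1", "op": "receive", "queue": "orders"},
    ])
    assert sends["orders"] == {"producer-1"} and receives["orders"] == {"consumer-1"}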

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection (network connection) to detect the datacenter queue associated with the message .
US7337214B2
CLAIM 8
. The server system of claim 1 , wherein each the server in the cluster of servers is adapted to be in communication with the other servers in the cluster of servers via a network connection (network connection) .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information (storage units) , consumer worker (storage units) information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker, producer worker information, consumer worker information, consumer worker pairs) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker (storage units) pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker, producer worker information, consumer worker information, consumer worker pairs) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .
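
To make the queue-user-table limitations of claims 12 and 14 concrete, the sketch below builds a small table from observed queue usage and then reports producer/consumer pairs that address the same datacenter queue. The table layout and function names are assumptions for illustration only, not the patent's data structures.

    def build_queue_user_table(usage_records):
        # Each record is assumed to look like
        # {"worker": "p1", "role": "producer", "queue": "orders"}.
        table = {}
        for record in usage_records:
            entry = table.setdefault(record["queue"], {"producers": set(), "consumers": set()})
            key = "producers" if record["role"] == "producer" else "consumers"
            entry[key].add(record["worker"])
        return table

    def matching_pairs(table):
        # Yield (producer, consumer, queue) triples that share the same queue.
        for queue, entry in table.items():
            for producer in sorted(entry["producers"]):
                for consumer in sorted(entry["consumers"]):
                    yield producer, consumer, queue

    table = build_queue_user_table([
        {"worker": "p1", "role": "producer", "queue": "orders"},
        {"worker": "c1", "role": "consumer", "queue": "orders"},
    ])
    assert list(matching_pairs(table)) == [("p1", "c1", "orders")]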

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker (storage units) pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker, producer worker information, consumer worker information, consumer worker pairs) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker (storage units) in response to the message request .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker, producer worker information, consumer worker information, consumer worker pairs) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (second server) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker (storage units) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US7337214B2
CLAIM 2
. The server system of claim 1 , wherein a second server (second server) of the cluster of servers retrieves the first requested article from the at least one of the servers in the cluster of servers when the customer requested article has already been requested from the backend servers due to a previous customer request for the first requested article .

US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker, producer worker information, consumer worker information, consumer worker pairs) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information (storage units) , consumer worker (storage units) information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker, producer worker information, consumer worker information, consumer worker pairs) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker (storage units) pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker, producer worker information, consumer worker information, consumer worker pairs) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker (storage units) associated with the message request and the datacenter queue associated with the message request .
US7337214B2
CLAIM 4
. The server system of claim 1 , wherein : the retrieved articles stored in the at least one server in the cluster of servers are stored in a memory device divided into smaller sized data storage units (consumer worker, producer worker information, consumer worker information, consumer worker pairs) ;
and each data storage unit is dynamically assigned a time interval such that only articles originally posted within the dynamically assigned time interval are stored in each the storage unit .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20040107259A1

Filed: 2003-07-11     Issued: 2004-06-03

Routing of electronic messages using a routing map and a stateful script engine

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC

Andrew Wallace, Christopher Ambler
US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one (second data) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20040107259A1
CLAIM 19
. A method as defined in claim 17 , wherein the routing map comprises a plurality of entries , wherein each of the entries includes : a first data field containing an operation identifier that uniquely identifies the particular entry ;
a second data (identifying one) field containing data representing one of the series of operations ;
and a third data field containing an argument , wherein : if said one of the series of operations is to be performed by the executable script , the argument is passed to the executable script when the routing map is executed ;
and if said one of the series of operations is to be performed by the routing engine , the argument is passed to the routing engine when the routing map is executed .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic (second client) through a network connection to identify the producer worker associated with the message .
US20040107259A1
CLAIM 15
. A method as defined in claim 14 , further comprising the step of distributing a first electronic message through the defined route according to a hub and spoke model , wherein the server system represents the hub and the communication links represent spokes , the step of distributing the first electronic message comprising the steps of : transmitting the first electronic message from the server system to a first client of the one or more clients without sending the routing map or the executable script to the first client ;
receiving at the server system a response from the first client to the electronic message ;
and transmitting the first electronic message from the server system to a second client (network traffic) of the one or more clients without sending the routing map or the executable script to the second client .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic (second client) through a network connection to detect the datacenter queue associated with the message .
US20040107259A1
CLAIM 15
. A method as defined in claim 14 , further comprising the step of distributing a first electronic message through the defined route according to a hub and spoke model , wherein the server system represents the hub and the communication links represent spokes , the step of distributing the first electronic message comprising the steps of : transmitting the first electronic message from the server system to a first client of the one or more clients without sending the routing map or the executable script to the first client ;
receiving at the server system a response from the first client to the electronic message ;
and transmitting the first electronic message from the server system to a second client (network traffic) of the one or more clients without sending the routing map or the executable script to the second client .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one (second data) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20040107259A1
CLAIM 19
. A method as defined in claim 17 , wherein the routing map comprises a plurality of entries , wherein each of the entries includes : a first data field containing an operation identifier that uniquely identifies the particular entry ;
a second data (identifying one) field containing data representing one of the series of operations ;
and a third data field containing an argument , wherein : if said one of the series of operations is to be performed by the executable script , the argument is passed to the executable script when the routing map is executed ;
and if said one of the series of operations is to be performed by the routing engine , the argument is passed to the routing engine when the routing map is executed .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20050015763A1

Filed: 2003-07-01     Issued: 2005-01-20

Method and system for maintaining consistency during multi-threaded processing of LDIF data

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

William Alexander, Kean Kuiper, Christopher Richardson
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker (consecutive manner) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20050015763A1
CLAIM 4
. The method of claim 1 further comprising : obtaining the entries in a consecutive manner (consumer worker, consumer worker pairs) .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker (consecutive manner) associated with the message request and the datacenter queue associated with the message request .
US20050015763A1
CLAIM 4
. The method of claim 1 further comprising : obtaining the entries in a consecutive manner (consumer worker, consumer worker pairs) .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker on a first virtual machine ;

and executing the consumer worker (consecutive manner) on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20050015763A1
CLAIM 4
. The method of claim 1 further comprising : obtaining the entries in a consecutive manner (consumer worker, consumer worker pairs) .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker (consecutive manner) at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20050015763A1
CLAIM 4
. The method of claim 1 further comprising : obtaining the entries in a consecutive manner (consumer worker, consumer worker pairs) .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker (consecutive manner) executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20050015763A1
CLAIM 4
. The method of claim 1 further comprising : obtaining the entries in a consecutive manner (consumer worker, consumer worker pairs) .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table (loading data) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker (consecutive manner) information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20050015763A1
CLAIM 1
. A method for loading data (queue user table) into a directory , the method comprising : obtaining LDIF (Lightweight Directory Access Protocol (LDAP) Data Interchange Format) entries ;
associating a priority value with each entry ;
and adding the entries into an LDAP directory in accordance with their associated priority values using multiple loading threads .

US20050015763A1
CLAIM 4
. The method of claim 1 further comprising : obtaining the entries in a consecutive manner (consumer worker, consumer worker pairs) .

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application is further configured to : update the queue user table (loading data) based on the observed queue usage information .
US20050015763A1
CLAIM 1
. A method for loading data (queue user table) into a directory , the method comprising : obtaining LDIF (Lightweight Directory Access Protocol (LDAP) Data Interchange Format) entries ;
associating a priority value with each entry ;
and adding the entries into an LDAP directory in accordance with their associated priority values using multiple loading threads .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker (consecutive manner) pairs through use of the queue user table (loading data) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20050015763A1
CLAIM 1
. A method for loading data (queue user table) into a directory , the method comprising : obtaining LDIF (Lightweight Directory Access Protocol (LDAP) Data Interchange Format) entries ;
associating a priority value with each entry ;
and adding the entries into an LDAP directory in accordance with their associated priority values using multiple loading threads .

US20050015763A1
CLAIM 4
. The method of claim 1 further comprising : obtaining the entries in a consecutive manner (consumer worker, consumer worker pairs) .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker (consecutive manner) pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
US20050015763A1
CLAIM 4
. The method of claim 1 further comprising : obtaining the entries in a consecutive manner (consumer worker, consumer worker pairs) .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker (consecutive manner) in response to the message request .
US20050015763A1
CLAIM 4
. The method of claim 1 further comprising : obtaining the entries in a consecutive manner (consumer worker, consumer worker pairs) .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker (consecutive manner) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20050015763A1
CLAIM 4
. The method of claim 1 further comprising : obtaining the entries in a consecutive manner (consumer worker, consumer worker pairs) .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table (loading data) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker (consecutive manner) information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20050015763A1
CLAIM 1
. A method for loading data (queue user table) into a directory , the method comprising : obtaining LDIF (Lightweight Directory Access Protocol (LDAP) Data Interchange Format) entries ;
associating a priority value with each entry ;
and adding the entries into an LDAP directory in accordance with their associated priority values using multiple loading threads .

US20050015763A1
CLAIM 4
. The method of claim 1 further comprising : obtaining the entries in a consecutive manner (consumer worker, consumer worker pairs) .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker (consecutive manner) pairs through use of the queue user table (loading data) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20050015763A1
CLAIM 1
. A method for loading data (queue user table) into a directory , the method comprising : obtaining LDIF (Lightweight Directory Access Protocol (LDAP) Data Interchange Format) entries ;
associating a priority value with each entry ;
and adding the entries into an LDAP directory in accordance with their associated priority values using multiple loading threads .

US20050015763A1
CLAIM 4
. The method of claim 1 further comprising : obtaining the entries in a consecutive manner (consumer worker, consumer worker pairs) .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker (consecutive manner) associated with the message request and the datacenter queue associated with the message request .
US20050015763A1
CLAIM 4
. The method of claim 1 further comprising : obtaining the entries in a consecutive manner (consumer worker, consumer worker pairs) .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20040252709A1

Filed: 2003-06-11     Issued: 2004-12-16

System having a plurality of threads being allocatable to a send or receive queue

(Original Assignee) Hewlett Packard Development Co LP     (Current Assignee) Hewlett Packard Development Co LP

Samuel Fineberg
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache (memory accesses) at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20040252709A1
CLAIM 8
. The computer system of claim 1 wherein said server processes client requests that comprise direct memory accesses (queue cache) to or from a device external to said server .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache (memory accesses) at the second server .
US20040252709A1
CLAIM 8
. The computer system of claim 1 wherein said server processes client requests that comprise direct memory accesses (queue cache) to or from a device external to said server .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions (remote direct memory access) ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache (memory accesses) at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20040252709A1
CLAIM 8
. The computer system of claim 1 wherein said server processes client requests that comprise direct memory accesses (queue cache) to or from a device external to said server .

US20040252709A1
CLAIM 14
. The server of claim 10 wherein the server sends data to or receives data from a client using remote direct memory access (store instructions) .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache (memory accesses) ;

and provide the intercepted message to the consumer worker in response to the message request .
US20040252709A1
CLAIM 8
. The computer system of claim 1 wherein said server processes client requests that comprise direct memory accesses (queue cache) to or from a device external to said server .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache (memory accesses) at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20040252709A1
CLAIM 8
. The computer system of claim 1 wherein said server processes client requests that comprise direct memory accesses (queue cache) to or from a device external to said server .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20040215847A1

Filed: 2003-04-25     Issued: 2004-10-28

Autonomic I/O adapter response performance optimization using polling

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

Shelly Dirstine, Naresh Nayar, Gregory Nordstrom
US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command (I/O device) from the datacenter queue , deleting the message from the datacenter queue .
US20040215847A1
CLAIM 14
. An input/output (I/O) processing system , comprising : a processor complex having one or more processors ;
a plurality of I/O processors , each coupled with one or more I/O device (readable storage, delete command) s ;
at least one upstream queue corresponding to each I/O processor ;
a set of queue pointers comprising , for each upstream queue , a first pointer indicative of a location of a most recent message posted to the upstream queue by a corresponding I/O processor and a second pointer indicative of a location of a most recent message processed by the processor complex ;
and an executable software component configured to poll the set of pointers to determine , for each upstream queue , the number of messages that have been posted to the upstream queue by the corresponding I/O processor since the most recent message was processed by the processor complex , based on relative values of the corresponding first and second pointers .
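
The upstream-queue polling recited in claim 14 of US20040215847A1 can be pictured as a two-pointer calculation over a circular queue: the number of posted-but-unprocessed messages is the distance between the "most recently posted" and "most recently processed" pointers. The modular arithmetic below is one assumed reading of that claim, not text from it.

    def pending_messages(posted_index, processed_index, queue_size):
        # posted_index and processed_index are the two per-queue pointers from the
        # claim; queue_size is an assumed fixed length of the circular upstream queue.
        return (posted_index - processed_index) % queue_size

    # Usage: pointers that wrap around an 8-slot queue still give the right count.
    assert pending_messages(posted_index=2, processed_index=6, queue_size=8) == 4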

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions (software component) ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20040215847A1
CLAIM 14
. An input/output (I/O) processing system , comprising : a processor complex having one or more processors ;
a plurality of I/O processors , each coupled with one or more I/O devices ;
at least one upstream queue corresponding to each I/O processor ;
a set of queue pointers comprising , for each upstream queue , a first pointer indicative of a location of a most recent message posted to the upstream queue by a corresponding I/O processor and a second pointer indicative of a location of a most recent message processed by the processor complex ;
and an executable software component (store instructions) configured to poll the set of pointers to determine , for each upstream queue , the number of messages that have been posted to the upstream queue by the corresponding I/O processor since the most recent message was processed by the processor complex , based on relative values of the corresponding first and second pointers .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command (I/O device) from the datacenter queue , delete the message from the first server .
US20040215847A1
CLAIM 14
. An input/output (I/O) processing system , comprising : a processor complex having one or more processors ;
a plurality of I/O processors , each coupled with one or more I/O device (readable storage, delete command) s ;
at least one upstream queue corresponding to each I/O processor ;
a set of queue pointers comprising , for each upstream queue , a first pointer indicative of a location of a most recent message posted to the upstream queue by a corresponding I/O processor and a second pointer indicative of a location of a most recent message processed by the processor complex ;
and an executable software component configured to poll the set of pointers to determine , for each upstream queue , the number of messages that have been posted to the upstream queue by the corresponding I/O processor since the most recent message was processed by the processor complex , based on relative values of the corresponding first and second pointers .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
EP1474746A1

Filed: 2003-02-14     Issued: 2004-11-10

Management of message queues

(Original Assignee) Proquent Systems Corp     (Current Assignee) Proquent Systems Corp

Thomas E. Hamilton, Kevin Kicklighter, Charles R. Davis
US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one (predetermined criterion, identifying one) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
EP1474746A1
CLAIM 9
. A method of managing messages , comprising : providing an application programming interface (API) to allow a producer module to send a message to a macro queue that manages a plurality of queues , the API sending the message to the macro queue without identifying one (identifying one) of the plurality of queues .

EP1474746A1
CLAIM 15
. A method comprising : keeping a list of queue pointers , each pointer pointing to one of a plurality of queues ;
receiving a request for adding a queue element ;
and servicing the request by selecting one or more queue pointers from the list based on a predetermined criterion (identifying one) and adding the queue element to the one or more queues that the selected one or more queue pointers are pointing to .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (second message) between the producer worker and the consumer worker .
EP1474746A1
CLAIM 25
. A method for passing messages between processes in a distributed system comprising : providing an application programming interface to processes hosted on computers of the distributed system ;
passing a first message from a first process to a second process hosted on one computer of the distributed system , including passing said message through a shared memory accessible to both the first process and the second process ;
and passing a second message (second message) from the first process to a third process hosted on a second computer of the distributed system , including passing said message over a communication channel coupling the first and the second computers .
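
Claim 25 of EP1474746A1 distinguishes a shared-memory path for processes hosted on the same computer from a channel-based path for processes on different computers. A toy dispatcher capturing that split might look like the following; the routing rule and names are assumptions for illustration only.

    def deliver(message, sender_host, receiver_host, shared_memory, network_send):
        # Same computer: hand the message off through the shared-memory buffer.
        # Different computers: push it over the (hypothetical) communication channel.
        if sender_host == receiver_host:
            shared_memory.append(message)
        else:
            network_send(receiver_host, message)

    # Usage: local delivery lands in the shared buffer, remote delivery uses the channel stub.
    shared_buffer, sent = [], []
    deliver("m1", "host-a", "host-a", shared_buffer, lambda h, m: sent.append((h, m)))
    deliver("m2", "host-a", "host-b", shared_buffer, lambda h, m: sent.append((h, m)))
    assert shared_buffer == ["m1"] and sent == [("host-b", "m2")]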

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one (predetermined criterion, identifying one) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
EP1474746A1
CLAIM 9
. A method of managing messages , comprising : providing an application programming interface (API) to allow a producer module to send a message to a macro queue that manages a plurality of queues , the API sending the message to the macro queue without identifying one (identifying one) of the plurality of queues .

EP1474746A1
CLAIM 15
. A method comprising : keeping a list of queue pointers , each pointer pointing to one of a plurality of queues ;
receiving a request for adding a queue element ;
and servicing the request by selecting one or more queue pointers from the list based on a predetermined criterion (identifying one) and adding the queue element to the one or more queues that the selected one or more queue pointers are pointing to .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20040117794A1

Filed: 2002-12-17     Issued: 2004-06-17

Method, system and framework for task scheduling

(Original Assignee) International Business Machines Corp     (Current Assignee) International Business Machines Corp

Ashish Kundu
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server (load balancing) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20040117794A1
CLAIM 19
. An apparatus suitable for load balancing (first server, network traffic) in a computing system , the apparatus comprising : a . a plurality of queues comprising : i . a first level queue for buffering incoming requests ;
ii . a plurality of subsequent level queues , each subsequent level of queue corresponding to a class of incoming request ;
b . means for classifying the requests into a plurality of subsequent level queues based on user defined parameters ;
c . means for exchanging information amongst the plurality of levels of queues ;
and d . means for dispatching the requests from the queues to at least one of the queues or target components .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server (load balancing) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20040117794A1
CLAIM 19
. An apparatus suitable for load balancing (first server, network traffic) in a computing system , the apparatus comprising : a . a plurality of queues comprising : i . a first level queue for buffering incoming requests ;
ii . a plurality of subsequent level queues , each subsequent level of queue corresponding to a class of incoming request ;
b . means for classifying the requests into a plurality of subsequent level queues based on user defined parameters ;
c . means for exchanging information amongst the plurality of levels of queues ;
and d . means for dispatching the requests from the queues to at least one of the queues or target components .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server (load balancing) .
US20040117794A1
CLAIM 19
. An apparatus suitable for load balancing (first server, network traffic) in a computing system , the apparatus comprising : a . a plurality of queues comprising : i . a first level queue for buffering incoming requests ;
ii . a plurality of subsequent level queues , each subsequent level of queue corresponding to a class of incoming request ;
b . means for classifying the requests into a plurality of subsequent level queues based on user defined parameters ;
c . means for exchanging information amongst the plurality of levels of queues ;
and d . means for dispatching the requests from the queues to at least one of the queues or target components .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic (load balancing) through a network connection to identify the producer worker associated with the message .
US20040117794A1
CLAIM 19
. An apparatus suitable for load balancing (first server, network traffic) in a computing system , the apparatus comprising : a . a plurality of queues comprising : i . a first level queue for buffering incoming requests ;
ii . a plurality of subsequent level queues , each subsequent level of queue corresponding to a class of incoming request ;
b . means for classifying the requests into a plurality of subsequent level queues based on user defined parameters ;
c . means for exchanging information amongst the plurality of levels of queues ;
and d . means for dispatching the requests from the queues to at least one of the queues or target components .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic (load balancing) through a network connection to detect the datacenter queue associated with the message .
US20040117794A1
CLAIM 19
. An apparatus suitable for load balancing (first server, network traffic) in a computing system , the apparatus comprising : a . a plurality of queues comprising : i . a first level queue for buffering incoming requests ;
ii . a plurality of subsequent level queues , each subsequent level of queue corresponding to a class of incoming request ;
b . means for classifying the requests into a plurality of subsequent level queues based on user defined parameters ;
c . means for exchanging information amongst the plurality of levels of queues ;
and d . means for dispatching the requests from the queues to at least one of the queues or target components .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table (predefined value) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20040117794A1
CLAIM 5
. The method as recited in claim 1 further comprising the step of processing the request at the first level if the obtained classification value is greater than a predefined value (queue user table) .

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application is further configured to : update the queue user table (predefined value) based on the observed queue usage information .
US20040117794A1
CLAIM 5
. The method as recited in claim 1 further comprising the step of processing the request at the first level if the obtained classification value is greater than a predefined value (queue user table) .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table (predefined value) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20040117794A1
CLAIM 5
. The method as recited in claim 1 further comprising the step of processing the request at the first level if the obtained classification value is greater than a predefined value (queue user table) .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server (load balancing) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20040117794A1
CLAIM 19
. An apparatus suitable for load balancing (first server, network traffic) in a computing system , the apparatus comprising : a . a plurality of queues comprising : i . a first level queue for buffering incoming requests ;
ii . a plurality of subsequent level queues , each subsequent level of queue corresponding to a class of incoming request ;
b . means for classifying the requests into a plurality of subsequent level queues based on user defined parameters ;
c . means for exchanging information amongst the plurality of levels of queues ;
and d . means for dispatching the requests from the queues to at least one of the queues or target components .
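
For readability, a short Python sketch of the overall flow recited in claim 17: a message that a co-located producer sends toward a remote datacenter queue is intercepted and kept in a local queue cache, and a co-located consumer's message request is answered from that cache. This is not the patent's implementation; LocalQueueCache and its methods are hypothetical.

    from collections import defaultdict, deque

    class LocalQueueCache:
        """Per-server cache of messages addressed to remote datacenter queues."""
        def __init__(self):
            self.cache = defaultdict(deque)  # queue name -> locally stored messages

        def intercept_send(self, queue_name, message):
            # Store the producer's message locally instead of the remote round trip.
            self.cache[queue_name].append(message)

        def serve_request(self, queue_name):
            # Answer a co-located consumer's message request from the local copy.
            return self.cache[queue_name].popleft() if self.cache[queue_name] else None

        def on_signal(self, queue_name, transform):
            # Modify locally stored messages in response to a command-channel signal.
            self.cache[queue_name] = deque(transform(m) for m in self.cache[queue_name])

    cache = LocalQueueCache()
    cache.intercept_send("queue-1", "hello")  # producer's message stays local
    print(cache.serve_request("queue-1"))     # consumer's request served locally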

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table (predefined value) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20040117794A1
CLAIM 5
. The method as recited in claim 1 further comprising the step of processing the request at the first level if the obtained classification value is greater than a predefined value (queue user table) .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table (predefined value) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
US20040117794A1
CLAIM 5
. The method as recited in claim 1 further comprising the step of processing the request at the first level if the obtained classification value is greater than a predefined value (queue user table) .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20040107240A1

Filed: 2002-12-02     Issued: 2004-06-03

Method and system for intertask messaging between multiple processors

(Original Assignee) Conexant Inc     (Current Assignee) Conexant Inc ; Brooktree Broadband Holding Inc

Boris Zabarski, Dorit Pardo, Yaacov Ben-Simon
US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one (associated process) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20040107240A1
CLAIM 40
. The system as in claim 34 , each of a subset of the plurality of processors further comprises : a second message queue ;
a second task operably connected to the second message queue ;
and wherein each mediator task of a processor is adapted to : store at least one message from a first task of an associated processor (identifying one) in the corresponding mediator message queue , the at least one message being intended for the second task of the processor ;
and transfer the at least one message from the corresponding mediator message queue to the second message queue of the processor during an execution of the mediator task by the processor .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (second message) between the producer worker and the consumer worker .
US20040107240A1
CLAIM 40
. The system as in claim 34 , each of a subset of the plurality of processors further comprises : a second message (second message) queue ;
a second task operably connected to the second message queue ;
and wherein each mediator task of a processor is adapted to : store at least one message from a first task of an associated processor in the corresponding mediator message queue , the at least one message being intended for the second task of the processor ;
and transfer the at least one message from the corresponding mediator message queue to the second message queue of the processor during an execution of the mediator task by the processor .
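
A minimal sketch, assuming the hypothetical local cache sketched earlier, of the claim 15 idea that matched queue information lets an intercept module take a faster local path for subsequent messages between a matched producer and consumer; send_remote stands in for the normal path to the remote datacenter queue.

    def handle_send(queue_name, message, matched_queues, local_cache, send_remote):
        """Route a producer's message: local fast path for matched queues."""
        if queue_name in matched_queues:
            local_cache.setdefault(queue_name, []).append(message)  # handled locally
        else:
            send_remote(queue_name, message)  # forwarded to the remote datacenter queue

    sent_remote = []
    handle_send("queue-1", "m1", {"queue-1"}, {}, lambda q, m: sent_remote.append((q, m)))  # stays local
    handle_send("queue-2", "m2", {"queue-1"}, {}, lambda q, m: sent_remote.append((q, m)))  # goes remote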

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one (associated process) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20040107240A1
CLAIM 40
. The system as in claim 34 , each of a subset of the plurality of processors further comprises : a second message queue ;
a second task operably connected to the second message queue ;
and wherein each mediator task of a processor is adapted to : store at least one message from a first task of an associated processor (identifying one) in the corresponding mediator message queue , the at least one message being intended for the second task of the processor ;
and transfer the at least one message from the corresponding mediator message queue to the second message queue of the processor during an execution of the mediator task by the processor .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20030028607A1

Filed: 2002-09-16     Issued: 2003-02-06

Methods and systems to manage and track the states of electronic media

(Original Assignee) Graham Miller; Michael Hanson; Brian Axe; Evans Steven Richard     (Current Assignee) METRICSTREAM Inc

Graham Miller, Michael Hanson, Brian Axe, Steven Evans
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server (client terminals) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (client terminals) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20030028607A1
CLAIM 63
. A network system including a plurality of client terminals (first server, second server, network connection) , comprising : at least one data processing machine located at each of the client terminals , and computer software , residing on a computer readable medium at each machine to cause the machine to perform the following operations : parsing an electronic message in response to an open action ;
receiving an electronic medium from a server containing dynamic content ;
and one of tracking and managing of plurality of states of the electronic medium .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (client terminals) .
US20030028607A1
CLAIM 63
. A network system including a plurality of client terminals (first server, second server, network connection) , comprising : at least one data processing machine located at each of the client terminals , and computer software , residing on a computer readable medium at each machine to cause the machine to perform the following operations : parsing an electronic message in response to an open action ;
receiving an electronic medium from a server containing dynamic content ;
and one of tracking and managing of plurality of states of the electronic medium .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server (client terminals) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (client terminals) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20030028607A1
CLAIM 63
. A network system including a plurality of client terminals (first server, second server, network connection) , comprising : at least one data processing machine located at each of the client terminals , and computer software , residing on a computer readable medium at each machine to cause the machine to perform the following operations : parsing an electronic message in response to an open action ;
receiving an electronic medium from a server containing dynamic content ;
and one of tracking and managing of plurality of states of the electronic medium .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server (client terminals) .
US20030028607A1
CLAIM 63
. A network system including a plurality of client terminals (first server, second server, network connection) , comprising : at least one data processing machine located at each of the client terminals , and computer software , residing on a computer readable medium at each machine to cause the machine to perform the following operations : parsing an electronic message in response to an open action ;
receiving an electronic medium from a server containing dynamic content ;
and one of tracking and managing of plurality of states of the electronic medium .
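
A small illustrative sketch of the claim 8 behaviour: a signal received over the queue's command channel modifies the locally held message, and a delete command from the datacenter queue removes it from the first server. The command dictionary format is an assumption made only for this example.

    def apply_command(local_messages, queue_name, command):
        """React to command-channel traffic for a locally cached message."""
        if command["type"] == "delete":
            local_messages.pop(queue_name, None)               # delete from this server
        elif command["type"] == "modify":
            local_messages[queue_name] = command["new_value"]  # modify in response to the signal

    local_messages = {"queue-1": "original"}
    apply_command(local_messages, "queue-1", {"type": "modify", "new_value": "updated"})
    apply_command(local_messages, "queue-1", {"type": "delete"})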

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection (client terminals) to identify the producer worker associated with the message .
US20030028607A1
CLAIM 63
. A network system including a plurality of client terminals (first server, second server, network connection) , comprising : at least one data processing machine located at each of the client terminals , and computer software , residing on a computer readable medium at each machine to cause the machine to perform the following operations : parsing an electronic message in response to an open action ;
receiving an electronic medium from a server containing dynamic content ;
and one of tracking and managing of plurality of states of the electronic medium .
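
For illustration, a sketch of observing network traffic to identify producer workers and detect the datacenter queues they address, as recited in claims 10 and 11. The per-record fields (op, src_worker, dst_queue) are assumptions; real traffic inspection would depend on the queue protocol in use.

    def classify_traffic(records):
        """Return the producer workers and datacenter queues seen in observed traffic."""
        producers, queues = set(), set()
        for rec in records:
            if rec.get("op") == "enqueue":
                producers.add(rec["src_worker"])  # identify the producer worker
                queues.add(rec["dst_queue"])      # detect the datacenter queue
        return producers, queues

    observed = [{"op": "enqueue", "src_worker": "worker-A", "dst_queue": "queue-1"}]
    print(classify_traffic(observed))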

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection (client terminals) to detect the datacenter queue associated with the message .
US20030028607A1
CLAIM 63
. A network system including a plurality of client terminals (first server, second server, network connection) , comprising : at least one data processing machine located at each of the client terminals , and computer software , residing on a computer readable medium at each machine to cause the machine to perform the following operations : parsing an electronic message in response to an open action ;
receiving an electronic medium from a server containing dynamic content ;
and one of tracking and managing of plurality of states of the electronic medium .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information (time stamp) , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20030028607A1
CLAIM 11
. The method of claim 1 further comprising setting a time stamp (producer worker information) to indicate when the notification is sent .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server (client terminals) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (client terminals) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20030028607A1
CLAIM 63
. A network system including a plurality of client terminals (first server, second server, network connection) , comprising : at least one data processing machine located at each of the client terminals , and computer software , residing on a computer readable medium at each machine to cause the machine to perform the following operations : parsing an electronic message in response to an open action ;
receiving an electronic medium from a server containing dynamic content ;
and one of tracking and managing of plurality of states of the electronic medium .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information (time stamp) , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20030028607A1
CLAIM 11
. The method of claim 1 further comprising setting a time stamp (producer worker information) to indicate when the notification is sent .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20030014551A1

Filed: 2002-08-21     Issued: 2003-01-16

Framework system

(Original Assignee) Future System Consulting Corp     (Current Assignee) Future Architect Inc

Kunihito Ishibashi, Mitsuru Maeshima, Narihiro Okumura, Isao Sakashita
US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command (monitoring operation) from the datacenter queue , deleting the message from the datacenter queue .
US20030014551A1
CLAIM 11
. A framework system according to claim 10 wherein the messaging services are respectively capable of monitoring operation (delete command) of one or more other messaging services ;
and , in the event that normal operation of at least one other messaging service fails to be detected , one or more messages is or are relayed to one or more normally operating other messaging services instead of to the at least one other messaging service .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions (ring buffer) ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20030014551A1
CLAIM 8
. A framework system according to claim 7 wherein at least one of the messaging service or services has one or more ring buffers (store instructions) , one or more of which is or are capable of temporarily delaying at least one of the one or more P to M messages .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command (monitoring operation) from the datacenter queue , delete the message from the first server .
US20030014551A1
CLAIM 11
. A framework system according to claim 10 wherein the messaging services are respectively capable of monitoring operation (delete command) of one or more other messaging services ;
and , in the event that normal operation of at least one other messaging service fails to be detected , one or more messages is or are relayed to one or more normally operating other messaging services instead of to the at least one other messaging service .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (queue management) between the producer worker and the consumer worker .
US20030014551A1
CLAIM 16
. A framework system connected so as to be capable of communication with one or more clients , said system comprising : one or more framework services , one or more of which is or are capable of processing one or more request messages from at least one of the client or clients and of outputting one or more reply messages to at least one of the client or clients ;
and one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
the request message or messages being prioritized in a particular fashion ;
at least one of the messaging service or services comprising one or more message queues capable of temporarily delaying at least one of the request message or messages and one or more queue management (second message) components capable of managing input and/or output of at least one of the message queue or queues ;
and at least one of the queue management component or components being provided with a prioritized mode by which , at one or more times when a plurality of messages have been stored in one or more of the message queue or queues , the order or orders in which the plurality of messages are output from the message queue or queues is or are controlled in correspondence to the respective priority or priorities of the respective message or messages , and with a sequence protection mode by which , at one or more times when a plurality of messages have been stored in one or more of the message queue or queues , retrieval of one or more other messages stored in the message queue or queues is prohibited until completion of processing at one or more of the framework service or services of at least one message previously retrieved from the message queue or queues .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20030055668A1

Filed: 2002-08-08     Issued: 2003-03-20

Workflow engine for automating business processes in scalable multiprocessor computer platforms

(Original Assignee) TriVium Systems Inc     (Current Assignee) TriVium Systems Inc

Amitabh Saran, Sanjay Suri, Purushottaman Balakrishnan, Shashidhar Kamath
US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command (value pair) from the datacenter queue , deleting the message from the datacenter queue .
US20030055668A1
CLAIM 22
. A method of designing a workflow according to claim 20 wherein the incoming trigger includes a set of incoming value pairs (delete command) and the portion of the incoming trigger mapped to the first input is a first value pair of the set of incoming value pairs .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command (value pair) from the datacenter queue , delete the message from the first server .
US20030055668A1
CLAIM 22
. A method of designing a workflow according to claim 20 wherein the incoming trigger includes a set of incoming value pairs (delete command) and the portion of the incoming trigger mapped to the first input is a first value pair of the set of incoming value pairs .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information (third data set) associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20030055668A1
CLAIM 2
. A system for executing a workflow according to claim 1 wherein , the workflow engine is operable to transmit a third message including a third header and third data set (datacenter queue information) to a second object based on a second requirement of the predetermined finite state machine , the identity of the second object determined based on the second data set , the second object having a second function , and operable to receive the third message from the workflow engine , execute the second function based on the third data set , generate a fourth message including a fourth header and a fourth data set , and transmit the fourth message to the workflow engine .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (second messages) between the producer worker and the consumer worker .
US20030055668A1
CLAIM 1
. A system for executing a workflow comprising : a workflow engine operable to receive an input message having a characteristic and data , the workflow engine operable to implement a predetermined finite state machine , based on the characteristic of the input message ;
the workflow engine operable to transmit a first message including a first header and a first data set to a first object based on a first requirement of the predetermined finite state machine , and receive a second message from the first object , the first object having a first function , and operable to receive the first message from the workflow engine , execute the first function based on the first data set , generate a second message including a second header and second data set representing a result of the executed first function , and transmit the second message to the workflow engine ;
a message platform operable to transfer first and second messages (second message) between the first object and the workflow engine .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information (third data set) associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20030055668A1
CLAIM 2
. A system for executing a workflow according to claim 1 wherein , the workflow engine is operable to transmit a third message including a third header and third data set (datacenter queue information) to a second object based on a second requirement of the predetermined finite state machine , the identity of the second object determined based on the second data set , the second object having a second function , and operable to receive the third message from the workflow engine , execute the second function based on the third data set , generate a fourth message including a fourth header and a fourth data set , and transmit the fourth message to the workflow engine .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20030097457A1

Filed: 2002-08-08     Issued: 2003-05-22

Scalable multiprocessor architecture for business computer platforms

(Original Assignee) Amitabh Saran; Mathews Manaloor; Arun Maheshwari; Sanjay Suri; Tarak Goradia     

Amitabh Saran, Mathews Manaloor, Arun Maheshwari, Sanjay Suri, Tarak Goradia
US9479472B2
CLAIM 1
. A method to locally process queue requests (exchanging messages) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (message request) to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20030097457A1
CLAIM 2
. A scalable software architecture according to claim 1 and further comprising a message interface coupled to the messaging platform for exchanging messages (queue requests) between the messaging platform and a third-party application .

US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request (message request) and the datacenter queue associated with the message request .
US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests (exchanging messages) from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20030097457A1
CLAIM 2
. A scalable software architecture according to claim 1 and further comprising a message interface coupled to the messaging platform for exchanging messages (queue requests) between the messaging platform and a third-party application .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request (message request) sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .
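
A minimal sketch of the claim 9 step of detecting a consumer's message request and identifying the consumer worker and the datacenter queue it targets; the request field names are hypothetical.

    def on_message_request(request):
        """Extract who is asking and which datacenter queue they are asking about."""
        return request["worker_id"], request["queue_name"]

    print(on_message_request({"worker_id": "worker-B", "queue_name": "queue-1"}))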

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (message request) that includes matching the consumer worker to the other datacenter queue .
US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (connected components) between the producer worker and the consumer worker .
US20030097457A1
CLAIM 8
. A scalable software architecture according to claim 1 wherein the messaging platform provides registration of connected components (second message) to implement request-reply transactions .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request (message request) .
US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests (exchanging messages) from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (message request) to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20030097457A1
CLAIM 2
. A scalable software architecture according to claim 1 and further comprising a message interface coupled to the messaging platform for exchanging messages (queue requests) between the messaging platform and a third-party application .

US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (message request) that includes matching the consumer worker to the other datacenter queue .
US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request (message request) and the datacenter queue associated with the message request .
US20030097457A1
CLAIM 12
. The software messaging platform of claim 11 wherein the MPM process is arranged to invoke methods for assigning a port on the messaging platform in response to a message requesting (message request) a connection to the bus ;
and maintaining a table of current message platform connection data .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20040019643A1

Filed: 2002-07-23     Issued: 2004-01-29

Remote command server

(Original Assignee) Canon Inc     (Current Assignee) Canon Inc

Robert Zirnstein
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (predetermined location) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (email address data) to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request (email address data) and the datacenter queue associated with the message request .
US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker (predetermined location) prior to storing the message in the queue cache at the second server .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker (predetermined location) on a first virtual machine ;

and executing the consumer worker on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .
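
A tiny sketch of the claim 6 arrangement in which the producer worker's virtual machine and the consumer worker's virtual machine both execute on the same first physical hardware; the placement mapping is invented purely for this example.

    placements = {"producer-vm": "host-1", "consumer-vm": "host-1"}  # VM -> physical host
    co_located = placements["producer-vm"] == placements["consumer-vm"]
    print(co_located)  # True when both virtual machines share the first physical hardware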

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker (predetermined location) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request (email address data) sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic (wireless telephone) through a network connection to identify the producer worker (predetermined location) associated with the message .
US20040019643A1
CLAIM 8
. A method according to claim 1 , wherein the second computing device is a wireless telephone (network traffic) having an electronic message application .

US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic (wireless telephone) through a network connection to detect the datacenter queue associated with the message .
US20040019643A1
CLAIM 8
. A method according to claim 1 , wherein the second computing device is a wireless telephone (network traffic) having an electronic message application .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (predetermined location) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker (predetermined location) and consumer worker pairs (body portion) through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (email address data) that includes matching the consumer worker to the other datacenter queue .
US20040019643A1
CLAIM 1
. A method for providing control of a first computing device from a second computing device , the method comprising the steps of : accessing an electronic message received by an electronic message application in the first computing device from the second computing device ;
extracting a command from the received electronic message ;
selecting from a plurality of function calls at least one function call corresponding to the extracted command ;
initiating execution of the at least one function call ;
obtaining output data (determine matching producer worker) from each executed function call ;
composing an output electronic message for each executed function call , said output electronic message being directed to a specific address and containing the output data from the executed function call ;
and sending each output electronic message via the electronic message application to the specific address .

US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US20040019643A1
CLAIM 24
. A method according to claim 21 , wherein the command is present within a body portion (consumer worker pairs) of the electronic message .

US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs (body portion) , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker (predetermined location) and the consumer worker .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US20040019643A1
CLAIM 24
. A method according to claim 21 , wherein the command is present within a body portion (consumer worker pairs) of the electronic message .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker (predetermined location) ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request (email address data) .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker (predetermined location) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (email address data) to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (predetermined location) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker (predetermined location) and consumer worker pairs (body portion) through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (email address data) that includes matching the consumer worker to the other datacenter queue .
US20040019643A1
CLAIM 21
. A method according to claim 1 , wherein the received electronic message contains a command indicator at a predetermined location (producer worker) of the electronic message to indicate that a command is present within the electronic message .

US20040019643A1
CLAIM 24
. A method according to claim 21 , wherein the command is present within a body portion (consumer worker pairs) of the electronic message .

US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request (email address data) and the datacenter queue associated with the message request .
US20040019643A1
CLAIM 30
. A method according to claim 28 , wherein , in the case that the e-mail address of the received electronic message is not included in the email address database (message request) , a command is not extracted and a corresponding function call is not executed , and the output electronic message contains text indicating that access to the first computing device is denied .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN1437146A

Filed: 2002-02-05     Issued: 2003-08-20

Method and email client for composing, browsing, replying to, and forwarding emails

(Original Assignee) International Business Machines Corporation

叶天正, 杨力平, 张雷
US9479472B2
CLAIM 1
. A method to locally process queue requests (contained) from co-located workers (email system) in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
CN1437146A
CLAIM 1
. A method for composing a new email in an email system (co-located workers) , comprising the steps of : a user composes a new email ; generating a Global-ID and assigning the Global-ID to the email ; and sending and saving the email .

CN1437146A
CLAIM 8
. A method for browsing an email in an email system , the email containing a Global-ID and a Reply-to-ID , the method comprising the steps of : a user opens and browses the email ; presenting the content contained (queue requests) in the email to the user ; retrieving the Reply-to-ID of the email ; determining whether the retrieved Reply-to-ID is empty ; searching the saved emails for an email whose Global-ID corresponds to the retrieved Reply-to-ID ; including the content of the found email in the browsed email and presenting it to the user ; retrieving the Reply-to-ID of the found email ; and repeating the determining , searching , including , and retrieving steps until the retrieved Reply-to-ID is empty or no saved email whose Global-ID corresponds to the retrieved Reply-to-ID can be found .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests (contained) from co-located workers (email system) , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
CN1437146A
CLAIM 1
. A method for composing a new email in an email system (co-located workers) , comprising the steps of : a user composes a new email ; generating a Global-ID and assigning the Global-ID to the email ; and sending and saving the email .

CN1437146A
CLAIM 8
. A method for browsing an email in an email system , the email containing a Global-ID and a Reply-to-ID , the method comprising the steps of : a user opens and browses the email ; presenting the content contained (queue requests) in the email to the user ; retrieving the Reply-to-ID of the email ; determining whether the retrieved Reply-to-ID is empty ; searching the saved emails for an email whose Global-ID corresponds to the retrieved Reply-to-ID ; including the content of the found email in the browsed email and presenting it to the user ; retrieving the Reply-to-ID of the found email ; and repeating the determining , searching , including , and retrieving steps until the retrieved Reply-to-ID is empty or no saved email whose Global-ID corresponds to the retrieved Reply-to-ID can be found .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests (contained) from co-located workers (email system) in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
CN1437146A
CLAIM 1
. A method for composing a new email in an email system (co-located workers) , comprising the steps of : a user composes a new email ; generating a Global-ID and assigning the Global-ID to the email ; and sending and saving the email .

CN1437146A
CLAIM 8
. A method for browsing an email in an email system , the email containing a Global-ID and a Reply-to-ID , the method comprising the steps of : a user opens and browses the email ; presenting the content contained (queue requests) in the email to the user ; retrieving the Reply-to-ID of the email ; determining whether the retrieved Reply-to-ID is empty ; searching the saved emails for an email whose Global-ID corresponds to the retrieved Reply-to-ID ; including the content of the found email in the browsed email and presenting it to the user ; retrieving the Reply-to-ID of the found email ; and repeating the determining , searching , including , and retrieving steps until the retrieved Reply-to-ID is empty or no saved email whose Global-ID corresponds to the retrieved Reply-to-ID can be found .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20030135618A1

Filed: 2002-01-17     Issued: 2003-07-17

Computer network for providing services and a method of providing services with a computer network

(Original Assignee) Hewlett Packard Co     (Current Assignee) Hewlett Packard Development Co LP

Ravikumar Pisupati
US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one (computing resources) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20030135618A1
CLAIM 1
. A computer network for providing services comprising : a plurality of computing elements each of which comprises computing resources (identifying one) for supporting one or more services ;
and a redirector , communicatively connected to each of said computing elements , configured to serve as an email proxy for said plurality of computing elements ;
wherein said services are controlled by email messages routed by said redirector among said plurality of computing elements .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information (web pages) , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20030135618A1
CLAIM 10
. The network of claim 9 , wherein said redirector generates web pages (consumer worker information) related to said services for said web client .
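Claim 12's "queue user table" is, in substance, an index of observed queue usage that records which workers were seen producing to or consuming from which datacenter queue. A minimal Python sketch of constructing and updating such a table follows; the record layout and function names are assumptions made for illustration, not the patent's implementation.

    from collections import defaultdict

    def build_queue_user_table(observed_usage):
        # Group observed producers and consumers per datacenter queue.
        table = defaultdict(lambda: {"producers": set(), "consumers": set()})
        for entry in observed_usage:
            table[entry["queue"]][entry["role"] + "s"].add(entry["worker"])
        return table

    def update_queue_user_table(table, entry):
        # Claims 13 and 18 add that the table is refreshed as new usage is observed.
        table[entry["queue"]][entry["role"] + "s"].add(entry["worker"])

    observations = [
        {"worker": "w1", "role": "producer", "queue": "q-images"},
        {"worker": "w2", "role": "consumer", "queue": "q-images"},
        {"worker": "w3", "role": "consumer", "queue": "q-logs"},
    ]
    queue_user_table = build_queue_user_table(observations)
    update_queue_user_table(queue_user_table, {"worker": "w4", "role": "producer", "queue": "q-logs"})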

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (email messages) between the producer worker and the consumer worker .
US20030135618A1
CLAIM 1
. A computer network for providing services comprising : a plurality of computing elements each of which comprises computing resources for supporting one or more services ;
and a redirector , communicatively connected to each of said computing elements , configured to serve as an email proxy for said plurality of computing elements ;
wherein said services are controlled by email messages (second message) routed by said redirector among said plurality of computing elements .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information (web pages) , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20030135618A1
CLAIM 10
. The network of claim 9 , wherein said redirector generates web pages (consumer worker information) related to said services for said web client .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one (computing resources) or more of : the consumer worker associated with the message request and the datacenter queue associated with the message request .
US20030135618A1
CLAIM 1
. A computer network for providing services comprising : a plurality of computing elements each of which comprises computing resources (identifying one) for supporting one or more services ;
and a redirector , communicatively connected to each of said computing elements , configured to serve as an email proxy for said plurality of computing elements ;
wherein said services are controlled by email messages routed by said redirector among said plurality of computing elements .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
EP1347390A1

Filed: 2001-12-27     Issued: 2003-09-24

Framework system

(Original Assignee) Future System Consulting Corp     (Current Assignee) Future System Consulting Corp

K. Ishibashi, M. Maeshima, N. Okumura, Isao Sakashita, Yoko Igakura (each c/o Future System Consulting Corp.)
US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command (monitoring operation) from the datacenter queue , deleting the message from the datacenter queue .
EP1347390A1
CLAIM 11
A framework system according to claim 10 wherein the messaging services are respectively capable of monitoring operation (delete command) of one or more other messaging services ;
and , in the event that normal operation of at least one other messaging service fails to be detected , one or more messages is or are relayed to one or more normally operating other messaging services instead of to the at least one other messaging service .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions (ring buffer) ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
EP1347390A1
CLAIM 8
A framework system according to claim 7 wherein at least one of the messaging service or services has one or more ring buffer (store instructions) s , one or more of which is or are capable of temporarily delaying at least one of the one or more P to M messages .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command (monitoring operation) from the datacenter queue , delete the message from the first server .
EP1347390A1
CLAIM 11
A framework system according to claim 10 wherein the messaging services are respectively capable of monitoring operation (delete command) of one or more other messaging services ;
and , in the event that normal operation of at least one other messaging service fails to be detected , one or more messages is or are relayed to one or more normally operating other messaging services instead of to the at least one other messaging service .
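Claim 8 ties local cache maintenance to the remote datacenter queue through two events: a signal received over the command channel, which triggers modification of the cached message, and a delete command, which removes the message from the first server. A hedged Python sketch of such a dispatcher follows; the signal names and cache layout are hypothetical.

    def handle_command_channel(signal, local_cache, queue_name, message_id):
        # Hypothetical dispatcher for signals arriving on the command channel
        # associated with the remote datacenter queue.
        messages = local_cache.setdefault(queue_name, {})
        if signal == "modify":
            # e.g. mark the locally cached copy while the remote copy changes state
            if message_id in messages:
                messages[message_id]["visible"] = False
        elif signal == "delete":
            # mirror the datacenter queue's delete command on the first server
            messages.pop(message_id, None)

    local_cache = {"jobs": {"m1": {"body": "payload", "visible": True}}}
    handle_command_channel("modify", local_cache, "jobs", "m1")
    handle_command_channel("delete", local_cache, "jobs", "m1")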

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (queue management) between the producer worker and the consumer worker .
EP1347390A1
CLAIM 16
A framework system connected so as to be capable of communication with one or more clients , said system comprising : one or more framework services , one or more of which is or are capable of processing one or more request messages from at least one of the client or clients and of outputting one or more reply messages to at least one of the client or clients ;
and one or more messaging services interposed between one or more of the client or clients and one or more of the framework service or services and capable of relaying one or more messages between the client or clients and the framework service or services ;
the request message or messages being prioritized in a particular fashion ;
at least one of the messaging service or services comprising one or more message queues capable of temporarily delaying at least one of the request message or messages and one or more queue management (second message) components capable of managing input and/or output of at least one of the message queue or queues ;
and at least one of the queue management component or components being provided with a prioritized mode by which , at one or more times when a plurality of messages have been stored in one or more of the message queue or queues , the order or orders in which the plurality of messages are output from the message queue or queues is or are controlled in correspondence to the respective priority or priorities of the respective message or messages , and with a sequence protection mode by which , at one or more times when a plurality of messages have been stored in one or more of the message queue or queues , retrieval of one or more other messages stored in the message queue or queues is prohibited until completion of processing at one or more of the framework service or services of at least one message previously retrieved from the message queue or queues .
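Claim 15 has the VMM feed "matched queue information" to its intercept module so that later messages on those queues are handled faster, i.e., short-circuited locally instead of crossing to the second server. The sketch below, with hypothetical names (InterceptModule, load_matched_queue_info, on_send), illustrates one way such gating could look; it is not asserted to be the patent's or the reference's design.

    class InterceptModule:
        # Hypothetical intercept module that short-circuits only matched queues.
        def __init__(self):
            self.matched_queues = set()
            self.local_store = {}

        def load_matched_queue_info(self, queue_names):
            # Supplied once matching producer/consumer pairs are identified.
            self.matched_queues.update(queue_names)

        def on_send(self, queue_name, message, forward_remote):
            if queue_name in self.matched_queues:
                # Keep the second message local; the co-located consumer can
                # receive it without a round trip to the remote datacenter queue.
                self.local_store.setdefault(queue_name, []).append(message)
            else:
                forward_remote(queue_name, message)

    intercept = InterceptModule()
    intercept.load_matched_queue_info({"q-images"})
    intercept.on_send("q-images", {"job": 42}, forward_remote=lambda q, m: None)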




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
JP2001285287A

Filed: 2001-02-19     Issued: 2001-10-12

Publish/subscribe apparatus and method using pre-filtering and post-filtering

(Original Assignee) Agilent Technol Inc; アジレント・テクノロジーズ・インク     

Jerremy Holland, Graham S Pollock, Joseph S Sventek, グラハム・エス・ポロック, ジェレミー・ホランド, ジョセフ・エス・スヴェンティック
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server (クライアント) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (クライアント) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
JP2001285287A
CLAIM 10
[Claim 10] A publish/subscribe apparatus comprising : a publisher client (first server, second server) that has a first filter and operates to generate channel instances corresponding to subscribed message types using the first filter ; a subscriber client that has a second filter and operates to subscribe to a message type and receive the messages contained in the corresponding channel instance , the second filter operating to filter instances of the particular message type using attributes of the message type ; a communication path extending between the publisher client and the subscriber client ; and a publish/subscribe mechanism located in the communication path and operating so that the subscriber client receives the corresponding channel instance for reception through the second filter .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (クライアント) .
JP2001285287A
CLAIM 10
[Claim 10] A publish/subscribe apparatus comprising : a publisher client (first server, second server) that has a first filter and operates to generate channel instances corresponding to subscribed message types using the first filter ; a subscriber client that has a second filter and operates to subscribe to a message type and receive the messages contained in the corresponding channel instance , the second filter operating to filter instances of the particular message type using attributes of the message type ; a communication path extending between the publisher client and the subscriber client ; and a publish/subscribe mechanism located in the communication path and operating so that the subscriber client receives the corresponding channel instance for reception through the second filter .

US9479472B2
CLAIM 7
. A computing device to provide local processing (スクライブ装置) of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server (クライアント) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (クライアント) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
JP2001285287A
CLAIM 2
[Claim 2] The publish/subscribe apparatus (local processing) according to claim 1 , wherein the first filter includes circuitry configured to identify the requested message types subscribed to by the subscriber , whereby the message instances generated by the publisher contain the identified message types .

JP2001285287A
CLAIM 10
[Claim 10] A publish/subscribe apparatus comprising : a publisher client (first server, second server) that has a first filter and operates to generate channel instances corresponding to subscribed message types using the first filter ; a subscriber client that has a second filter and operates to subscribe to a message type and receive the messages contained in the corresponding channel instance , the second filter operating to filter instances of the particular message type using attributes of the message type ; a communication path extending between the publisher client and the subscriber client ; and a publish/subscribe mechanism located in the communication path and operating so that the subscriber client receives the corresponding channel instance for reception through the second filter .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server (クライアント) .
JP2001285287A
CLAIM 10
[Claim 10] A publish/subscribe apparatus comprising : a publisher client (first server, second server) that has a first filter and operates to generate channel instances corresponding to subscribed message types using the first filter ; a subscriber client that has a second filter and operates to subscribe to a message type and receive the messages contained in the corresponding channel instance , the second filter operating to filter instances of the particular message type using attributes of the message type ; a communication path extending between the publisher client and the subscriber client ; and a publish/subscribe mechanism located in the communication path and operating so that the subscriber client receives the corresponding channel instance for reception through the second filter .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server (クライアント) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (クライアント) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
JP2001285287A
CLAIM 10
[Claim 10] A publish/subscribe apparatus comprising : a publisher client (first server, second server) that has a first filter and operates to generate channel instances corresponding to subscribed message types using the first filter ; a subscriber client that has a second filter and operates to subscribe to a message type and receive the messages contained in the corresponding channel instance , the second filter operating to filter instances of the particular message type using attributes of the message type ; a communication path extending between the publisher client and the subscriber client ; and a publish/subscribe mechanism located in the communication path and operating so that the subscriber client receives the corresponding channel instance for reception through the second filter .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20010025300A1

Filed: 2001-01-12     Issued: 2001-09-27

Methods and systems to manage and track the states of electronic media

(Original Assignee) Zaplet Inc     (Current Assignee) METRICSTREAM Inc

Graham Miller, Michael Hanson, Brian Axe, Steven Evans
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server (client terminals) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (client terminals) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20010025300A1
CLAIM 63
. A network system including a plurality of client terminals (first server, second server, network connection) , comprising : at least one data processing machine located at each of the client terminals , and computer software , residing on a computer readable medium at each machine to cause the machine to perform the following operations : parsing an electronic message in response to an open action ;
receiving an electronic medium from a server containing dynamic content ;
and one of tracking and managing of plurality of states of the electronic medium .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (client terminals) .
US20010025300A1
CLAIM 63
. A network system including a plurality of client terminals (first server, second server, network connection) , comprising : at least one data processing machine located at each of the client terminals , and computer software , residing on a computer readable medium at each machine to cause the machine to perform the following operations : parsing an electronic message in response to an open action ;
receiving an electronic medium from a server containing dynamic content ;
and one of tracking and managing of plurality of states of the electronic medium .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server (client terminals) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (client terminals) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20010025300A1
CLAIM 63
. A network system including a plurality of client terminals (first server, second server, network connection) , comprising : at least one data processing machine located at each of the client terminals , and computer software , residing on a computer readable medium at each machine to cause the machine to perform the following operations : parsing an electronic message in response to an open action ;
receiving an electronic medium from a server containing dynamic content ;
and one of tracking and managing of plurality of states of the electronic medium .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server (client terminals) .
US20010025300A1
CLAIM 63
. A network system including a plurality of client terminals (first server, second server, network connection) , comprising : at least one data processing machine located at each of the client terminals , and computer software , residing on a computer readable medium at each machine to cause the machine to perform the following operations : parsing an electronic message in response to an open action ;
receiving an electronic medium from a server containing dynamic content ;
and one of tracking and managing of plurality of states of the electronic medium .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection (client terminals) to identify the producer worker associated with the message .
US20010025300A1
CLAIM 63
. A network system including a plurality of client terminals (first server, second server, network connection) , comprising : at least one data processing machine located at each of the client terminals , and computer software , residing on a computer readable medium at each machine to cause the machine to perform the following operations : parsing an electronic message in response to an open action ;
receiving an electronic medium from a server containing dynamic content ;
and one of tracking and managing of plurality of states of the electronic medium .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection (client terminals) to detect the datacenter queue associated with the message .
US20010025300A1
CLAIM 63
. A network system including a plurality of client terminals (first server, second server, network connection) , comprising : at least one data processing machine located at each of the client terminals , and computer software , residing on a computer readable medium at each machine to cause the machine to perform the following operations : parsing an electronic message in response to an open action ;
receiving an electronic medium from a server containing dynamic content ;
and one of tracking and managing of plurality of states of the electronic medium .
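Claims 10 and 11 have the VMM observe network traffic through a network connection in order to identify the producer worker and the datacenter queue associated with a message. As a rough illustration only, the Python sketch below classifies captured outbound requests by a hypothetical queue-endpoint URL pattern; the URL scheme, verbs, and field names are assumptions, not the patent's wire format.

    import re

    QUEUE_ENDPOINT = re.compile(r"/queues/(?P<queue>[^/]+)/messages")

    def classify_traffic(captured_requests):
        # Map observed outbound requests to (worker, role, datacenter queue).
        results = []
        for request in captured_requests:
            match = QUEUE_ENDPOINT.search(request["url"])
            if not match:
                continue
            role = "producer" if request["verb"] == "POST" else "consumer"
            results.append((request["src_vm"], role, match.group("queue")))
        return results

    captured = [
        {"src_vm": "vm-7", "verb": "POST",
         "url": "https://queue.example.net/queues/q-images/messages"},
        {"src_vm": "vm-9", "verb": "GET",
         "url": "https://queue.example.net/queues/q-images/messages"},
    ]
    print(classify_traffic(captured))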

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information (time stamp) , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20010025300A1
CLAIM 11
. The method of claim 1 further comprising setting a time stamp (producer worker information) to indicate when the notification is sent .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server (client terminals) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (client terminals) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20010025300A1
CLAIM 63
. A network system including a plurality of client terminals (first server, second server, network connection) , comprising : at least one data processing machine located at each of the client terminals , and computer software , residing on a computer readable medium at each machine to cause the machine to perform the following operations : parsing an electronic message in response to an open action ;
receiving an electronic medium from a server containing dynamic content ;
and one of tracking and managing of plurality of states of the electronic medium .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information (time stamp) , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20010025300A1
CLAIM 11
. The method of claim 1 further comprising setting a time stamp (producer worker information) to indicate when the notification is sent .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
US20020120664A1

Filed: 2000-12-15     Issued: 2002-08-29

Scalable transaction processing pipeline

(Original Assignee) Aristos Logic Corp     (Current Assignee) Aristos Logic Corp

Robert Horn, Virgil Wilkins, Mark Myran, David Walls, Gnanashanmugam Elumalai, U'Tee Cheah
US9479472B2
CLAIM 1
. A method to locally process queue requests (logical block address) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
US20020120664A1
CLAIM 18
. The system of claim 1 wherein at least one of the processing elements maps data addresses to logical block address (queue requests) es of a disk drive .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests (logical block address) from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
US20020120664A1
CLAIM 18
. The system of claim 1 wherein at least one of the processing elements maps data addresses to logical block address (queue requests) es of a disk drive .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information (first subset) associated with the producer worker , and datacenter queue information associated with the consumer worker .
US20020120664A1
CLAIM 9
. The system of claim 1 wherein two or more processing elements comprise a first subset (queue information) of the plurality of processing elements , wherein the first subset is adapted for processing a selected subtask of the plurality of subtasks , wherein each processing element of the first subset is adapted to process a portion of the selected subtask .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information (first subset) to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (queue management) between the producer worker and the consumer worker .
US20020120664A1
CLAIM 9
. The system of claim 1 wherein two or more processing elements comprise a first subset (queue information) of the plurality of processing elements , wherein the first subset is adapted for processing a selected subtask of the plurality of subtasks , wherein each processing element of the first subset is adapted to process a portion of the selected subtask .

US20020120664A1
CLAIM 25
. The system of claim 1 wherein the tasks are selected from the group consisting of : RAID requests ;
queue management (second message) commands , cache data request , read data requests , write data requests , block level read requests , block level write requests , file level data read requests , file level data write requests , directory structure commands , and database manipulation commands .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests (logical block address) from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
US20020120664A1
CLAIM 18
. The system of claim 1 wherein at least one of the processing elements maps data addresses to logical block address (queue requests) es of a disk drive .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information (first subset) associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
US20020120664A1
CLAIM 9
. The system of claim 1 wherein two or more processing elements comprise a first subset (queue information) of the plurality of processing elements , wherein the first subset is adapted for processing a selected subtask of the plurality of subtasks , wherein each processing element of the first subset is adapted to process a portion of the selected subtask .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
JP2000155693A

Filed: 1998-11-18     Issued: 2000-06-06

Message control apparatus

(Original Assignee) Fujitsu Ltd; 富士通株式会社     

Hiroaki Komine, 浩昭 小峰, Noriyuki Yogoshi, 紀之 余越, Kazumasa Karaki, 一賢 唐木
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (管理テーブル, テーブル中) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
JP2000155693A
CLAIM 1
[Claim 1] A message control apparatus for exchanging messages between objects belonging to different processes , comprising : a queue buffer provided for each target object ; a message distribution unit that distributes the messages to the queue buffers according to their target objects ; and a thread control unit that simultaneously creates a plurality of threads for taking the messages out of the queue buffers and giving them to the corresponding target objects ; wherein the thread control unit has a thread management table (second server, queue user table) that manages the maximum number of allocatable threads for messages that all threads in the process can handle and the current number of messages being processed , and waits to take out a message when the number of messages being processed exceeds the maximum number of allocatable threads .

JP2000155693A
CLAIM 3
[Claim 3] The message control apparatus according to claim 1 or 2 , further comprising a thread allocation control unit including a thread allocation management table that holds the maximum number of allocatable threads corresponding to the CPU utilization of the process , wherein the thread allocation control unit monitors the CPU utilization at fixed intervals and designates the maximum number of allocatable threads corresponding to that CPU utilization in the thread allocation management table (second server, queue user table) as the maximum number of allocatable threads in the thread management table .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (管理テーブル, テーブル中) .
JP2000155693A
CLAIM 1
[Claim 1] A message control apparatus for exchanging messages between objects belonging to different processes , comprising : a queue buffer provided for each target object ; a message distribution unit that distributes the messages to the queue buffers according to their target objects ; and a thread control unit that simultaneously creates a plurality of threads for taking the messages out of the queue buffers and giving them to the corresponding target objects ; wherein the thread control unit has a thread management table (second server, queue user table) that manages the maximum number of allocatable threads for messages that all threads in the process can handle and the current number of messages being processed , and waits to take out a message when the number of messages being processed exceeds the maximum number of allocatable threads .

JP2000155693A
CLAIM 3
[Claim 3] The message control apparatus according to claim 1 or 2 , further comprising a thread allocation control unit including a thread allocation management table that holds the maximum number of allocatable threads corresponding to the CPU utilization of the process , wherein the thread allocation control unit monitors the CPU utilization at fixed intervals and designates the maximum number of allocatable threads corresponding to that CPU utilization in the thread allocation management table (second server, queue user table) as the maximum number of allocatable threads in the thread management table .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker on a first virtual machine (えること) ;

and executing the consumer worker on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
JP2000155693A
CLAIM 2
[Claim 2] A message control apparatus for exchanging messages between objects belonging to different processes , comprising : a queue buffer provided for each target object ; a message distribution unit that distributes the messages to the queue buffers according to their target objects ; and a thread control unit that simultaneously creates a plurality of threads for taking the messages out of the queue buffers and giving them to the corresponding target objects ; wherein the thread control unit has a thread management table that manages , for each target object , the maximum number of allocatable threads able to give the messages (first virtual machine) to that target object and the current number of messages being processed , and waits to take out messages for the corresponding target object when the number of messages being processed exceeds the maximum number of allocatable threads .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (管理テーブル, テーブル中) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
JP2000155693A
CLAIM 1
[Claim 1] A message control apparatus for exchanging messages between objects belonging to different processes , comprising : a queue buffer provided for each target object ; a message distribution unit that distributes the messages to the queue buffers according to their target objects ; and a thread control unit that simultaneously creates a plurality of threads for taking the messages out of the queue buffers and giving them to the corresponding target objects ; wherein the thread control unit has a thread management table (second server, queue user table) that manages the maximum number of allocatable threads for messages that all threads in the process can handle and the current number of messages being processed , and waits to take out a message when the number of messages being processed exceeds the maximum number of allocatable threads .

JP2000155693A
CLAIM 3
[Claim 3] The message control apparatus according to claim 1 or 2 , further comprising a thread allocation control unit including a thread allocation management table that holds the maximum number of allocatable threads corresponding to the CPU utilization of the process , wherein the thread allocation control unit monitors the CPU utilization at fixed intervals and designates the maximum number of allocatable threads corresponding to that CPU utilization in the thread allocation management table (second server, queue user table) as the maximum number of allocatable threads in the thread management table .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table (管理テーブル, テーブル中) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
JP2000155693A
CLAIM 1
[Claim 1] A message control apparatus for exchanging messages between objects belonging to different processes , comprising : a queue buffer provided for each target object ; a message distribution unit that distributes the messages to the queue buffers according to their target objects ; and a thread control unit that simultaneously creates a plurality of threads for taking the messages out of the queue buffers and giving them to the corresponding target objects ; wherein the thread control unit has a thread management table (second server, queue user table) that manages the maximum number of allocatable threads for messages that all threads in the process can handle and the current number of messages being processed , and waits to take out a message when the number of messages being processed exceeds the maximum number of allocatable threads .

JP2000155693A
CLAIM 3
[Claim 3] The message control apparatus according to claim 1 or 2 , further comprising a thread allocation control unit including a thread allocation management table that holds the maximum number of allocatable threads corresponding to the CPU utilization of the process , wherein the thread allocation control unit monitors the CPU utilization at fixed intervals and designates the maximum number of allocatable threads corresponding to that CPU utilization in the thread allocation management table (second server, queue user table) as the maximum number of allocatable threads in the thread management table .

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application is further configured to : update the queue user table (管理テーブル, テーブル中) based on the observed queue usage information .
JP2000155693A
CLAIM 1
[Claim 1] A message control apparatus for exchanging messages between objects belonging to different processes , comprising : a queue buffer provided for each target object ; a message distribution unit that distributes the messages to the queue buffers according to their target objects ; and a thread control unit that simultaneously creates a plurality of threads for taking the messages out of the queue buffers and giving them to the corresponding target objects ; wherein the thread control unit has a thread management table (second server, queue user table) that manages the maximum number of allocatable threads for messages that all threads in the process can handle and the current number of messages being processed , and waits to take out a message when the number of messages being processed exceeds the maximum number of allocatable threads .

JP2000155693A
CLAIM 3
[Claim 3] The message control apparatus according to claim 1 or 2 , further comprising a thread allocation control unit including a thread allocation management table that holds the maximum number of allocatable threads corresponding to the CPU utilization of the process , wherein the thread allocation control unit monitors the CPU utilization at fixed intervals and designates the maximum number of allocatable threads corresponding to that CPU utilization in the thread allocation management table (second server, queue user table) as the maximum number of allocatable threads in the thread management table .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table (管理テーブル, テーブル中) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
JP2000155693A
CLAIM 1
[Claim 1] A message control apparatus for exchanging messages between objects belonging to different processes , comprising : a queue buffer provided for each target object ; a message distribution unit that distributes the messages to the queue buffers according to their target objects ; and a thread control unit that simultaneously creates a plurality of threads for taking the messages out of the queue buffers and giving them to the corresponding target objects ; wherein the thread control unit has a thread management table (second server, queue user table) that manages the maximum number of allocatable threads for messages that all threads in the process can handle and the current number of messages being processed , and waits to take out a message when the number of messages being processed exceeds the maximum number of allocatable threads .

JP2000155693A
CLAIM 3
[Claim 3] The message control apparatus according to claim 1 or 2 , further comprising a thread allocation control unit including a thread allocation management table that holds the maximum number of allocatable threads corresponding to the CPU utilization of the process , wherein the thread allocation control unit monitors the CPU utilization at fixed intervals and designates the maximum number of allocatable threads corresponding to that CPU utilization in the thread allocation management table (second server, queue user table) as the maximum number of allocatable threads in the thread management table .
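Claims 14 and 19 determine matching producer and consumer worker pairs from the queue user table by finding a producer's message and a consumer's message request that refer to the same datacenter queue. A minimal Python sketch of that pairing step follows; the table shape mirrors the earlier hypothetical queue user table sketch and is likewise an assumption rather than the patent's implementation.

    def matched_pairs(queue_user_table):
        # A producer and a consumer observed on the same datacenter queue form
        # a matched pair; their queue becomes a candidate "matched queue".
        pairs = []
        for queue_name, users in queue_user_table.items():
            for producer in users["producers"]:
                for consumer in users["consumers"]:
                    pairs.append((producer, consumer, queue_name))
        return pairs

    table = {
        "q-images": {"producers": {"w1"}, "consumers": {"w2"}},
        "q-logs": {"producers": set(), "consumers": {"w3"}},
    }
    print(matched_pairs(table))  # only q-images yields a matched pair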

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (管理テーブル, テーブル中) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
JP2000155693A
CLAIM 1
[Claim 1] A message control apparatus for exchanging messages between objects belonging to different processes , comprising : a queue buffer provided for each target object ; a message distribution unit that distributes the messages to the queue buffers according to their target objects ; and a thread control unit that simultaneously creates a plurality of threads for taking the messages out of the queue buffers and giving them to the corresponding target objects ; wherein the thread control unit has a thread management table (second server, queue user table) that manages the maximum number of allocatable threads for messages that all threads in the process can handle and the current number of messages being processed , and waits to take out a message when the number of messages being processed exceeds the maximum number of allocatable threads .

JP2000155693A
CLAIM 3
[Claim 3] The message control apparatus according to claim 1 or 2 , further comprising a thread allocation control unit including a thread allocation management table that holds the maximum number of allocatable threads corresponding to the CPU utilization of the process , wherein the thread allocation control unit monitors the CPU utilization at fixed intervals and designates the maximum number of allocatable threads corresponding to that CPU utilization in the thread allocation management table (second server, queue user table) as the maximum number of allocatable threads in the thread management table .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table (管理テーブル, テーブル中) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
JP2000155693A
CLAIM 1
[Claim 1] A message control apparatus for exchanging messages between objects belonging to different processes , comprising : a queue buffer provided for each target object ; a message distribution unit that distributes the messages to the queue buffers according to their target objects ; and a thread control unit that simultaneously creates a plurality of threads for taking the messages out of the queue buffers and giving them to the corresponding target objects ; wherein the thread control unit has a thread management table (second server, queue user table) that manages the maximum number of allocatable threads for messages that all threads in the process can handle and the current number of messages being processed , and waits to take out a message when the number of messages being processed exceeds the maximum number of allocatable threads .

JP2000155693A
CLAIM 3
[Claim 3] The message control apparatus according to claim 1 or 2 , further comprising a thread allocation control unit including a thread allocation management table that holds the maximum number of allocatable threads corresponding to the CPU utilization of the process , wherein the thread allocation control unit monitors the CPU utilization at fixed intervals and designates the maximum number of allocatable threads corresponding to that CPU utilization in the thread allocation management table (second server, queue user table) as the maximum number of allocatable threads in the thread management table .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table (管理テーブル, テーブル中) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
JP2000155693A
CLAIM 1
[Claim 1] A message control apparatus for exchanging messages between objects belonging to different processes , comprising : a queue buffer provided for each target object ; a message distribution unit that distributes the messages to the queue buffers according to their target objects ; and a thread control unit that simultaneously creates a plurality of threads for taking the messages out of the queue buffers and giving them to the corresponding target objects ; wherein the thread control unit has a thread management table (second server, queue user table) that manages the maximum number of allocatable threads for messages that all threads in the process can handle and the current number of messages being processed , and waits to take out a message when the number of messages being processed exceeds the maximum number of allocatable threads .

JP2000155693A
CLAIM 3
[Claim 3] The message control apparatus according to claim 1 or 2 , further comprising a thread allocation control unit including a thread allocation management table that holds the maximum number of allocatable threads corresponding to the CPU utilization of the process , wherein the thread allocation control unit monitors the CPU utilization at fixed intervals and designates the maximum number of allocatable threads corresponding to that CPU utilization in the thread allocation management table (second server, queue user table) as the maximum number of allocatable threads in the thread management table .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102930427A

Filed: 2012-11-15     Issued: 2013-02-13

Schedule management method and mobile terminal thereof

(Original Assignee) Huaqin Telecom Technology Co Ltd     (Current Assignee) Huaqin Telecom Technology Co Ltd

潘世行
US9479472B2
CLAIM 1
. A method to locally process queue requests (请求信息) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (包括访问) to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
CN102930427A
CLAIM 8
. The method according to any one of claims 1 to 4 , wherein the push information includes request information (queue requests) for accessing (message request) and setting the schedule management .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request (包括访问) and the datacenter queue associated with the message request .
CN102930427A
CLAIM 8
. The method according to any one of claims 1 to 4 , wherein the push information includes request information for accessing (message request) and setting the schedule management .

US9479472B2
CLAIM 7
. A computing device to provide local processing (进行解析) of queue requests (请求信息) from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
CN102930427A
CLAIM 8
. The method according to any one of claims 1 to 4 , wherein the push information includes request information (queue requests) for accessing and setting the schedule management .

CN102930427A
CLAIM 14
. A mobile terminal , comprising : an information receiving module for receiving the push information sent by a server ; a parsing module for parsing (local processing) the received push information to extract schedule management information and request information for setting the schedule management ; an authentication module for looking up the number in the request information for setting the schedule management and determining whether it is an authorized number ; if it is an authorized number , the mobile terminal sends a reception acknowledgement to the server ; if it is an unauthorized number , the user is asked whether to allow the user of that number to access the local schedule management module ; if the user agrees , the number is authorized , the multimedia message is downloaded from the server , its data is parsed and set into the schedule module of the receiving terminal ; if the user refuses , the receiving terminal notifies the server to discard the multimedia message ; and a schedule management module for controlling the mobile terminal to implement the schedule management function , performing the set reminder operation at the set time and playing the related multimedia files .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request (包括访问) sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
CN102930427A
CLAIM 8
. The method according to any one of claims 1 to 4 , wherein the push information includes request information for accessing (message request) and setting the schedule management .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (包括访问) that includes matching the consumer worker to the other datacenter queue .
CN102930427A
CLAIM 8
. The method according to any one of claims 1 to 4 , wherein the push information includes request information for accessing (message request) and setting the schedule management .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request (包括访问) .
CN102930427A
CLAIM 8
. The method according to any one of claims 1 to 4 , wherein the push information includes request information for accessing (message request) and setting the schedule management .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests (请求信息) from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (包括访问) to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
CN102930427A
CLAIM 8
. The method according to any one of claims 1 to 4 , wherein the push information includes request information (queue requests) for accessing (message request) and setting the schedule management .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (包括访问) that includes matching the consumer worker to the other datacenter queue .
CN102930427A
CLAIM 8
. The method according to any one of claims 1 to 4 , wherein the push information includes request information for accessing (message request) and setting the schedule management .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request (包括访问) and the datacenter queue associated with the message request .
CN102930427A
CLAIM 8
. The method according to any one of claims 1 to 4 , wherein the push information includes request information for accessing (message request) and setting the schedule management .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102891779A

Filed: 2012-09-27     Issued: 2013-01-23

Large-scale network performance measurement system and method for IP networks

(Original Assignee) BEIJING WRD TECHNOLOGY Co Ltd     (Current Assignee) BEIJING WRD TECHNOLOGY Co Ltd

徐立人, 丛群
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (结果进行) at a first server (传输协议) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker (结果进行) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks , wherein the system consists of a server located in the core network and a plurality of network probes located in the network under test , arranged in a server/client architecture ; wherein : the server , composed of computers or servers with massive network data processing capability , is used to generate measurement tasks , to interact and communicate with the network probes , to issue measurement tasks and obtain measurement results , and to aggregate and present (producer worker, consumer worker, producer worker information, consumer worker information, consumer worker pairs) the measurement results uploaded by the network probes ; it has four components : a user interface module , a task scheduling module , a communication module and a database ; the network probes , composed of embedded devices or computers that have network measurement capability , can interact and communicate with the server and form a distributed cluster , are used to receive and execute measurement tasks from the server and to report the measurement results to the server ; the network measurement performed by the network probes is mainly active measurement : the network state is judged by sending data into the network and observing the transmission conditions , the time required and the results ; they have three components : a communication module , a task scheduling module and a test module .

CN102891779A
CLAIM 4
. The system according to claim 1 , wherein the interactive communication between the network-probe interface of the server's communication module and the network probes uses the hypertext transfer protocol (first server) HTTP , in order to guarantee quality of service (QoS) and to allow flexible firewall configuration : during the measurement process , the network probes periodically initiate transmission control protocol (TCP) connections and connect to the server by domain name , so that the system can traverse gateway network address translation (NAT) and perform cross-network measurement to the greatest possible extent ; and because domain-name access is used , the server can implement load balancing at the domain name system (DNS) layer and its IP address can be switched and migrated ; the network probes use HTTP POST to place measurement tasks and measurement results in packet payloads in a compact data-encapsulation format and send them to the server , and the server returns HTTP response packets to the network probes , so as to avoid out-of-memory conditions on the embedded network probes .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker (结果进行) associated with the message request and the datacenter queue associated with the message request .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks , wherein the system consists of a server located in the core network and a plurality of network probes located in the network under test , arranged in a server/client architecture ; wherein : the server , composed of computers or servers with massive network data processing capability , is used to generate measurement tasks , to interact and communicate with the network probes , to issue measurement tasks and obtain measurement results , and to aggregate and present (producer worker, consumer worker, producer worker information, consumer worker information, consumer worker pairs) the measurement results uploaded by the network probes ; it has four components : a user interface module , a task scheduling module , a communication module and a database ; the network probes , composed of embedded devices or computers that have network measurement capability , can interact and communicate with the server and form a distributed cluster , are used to receive and execute measurement tasks from the server and to report the measurement results to the server ; the network measurement performed by the network probes is mainly active measurement : the network state is judged by sending data into the network and observing the transmission conditions , the time required and the results ; they have three components : a communication module , a task scheduling module and a test module .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker (结果进行) prior to storing the message in the queue cache at the second server .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks , wherein the system consists of a server located in the core network and a plurality of network probes located in the network under test , arranged in a server/client architecture ; wherein : the server , composed of computers or servers with massive network data processing capability , is used to generate measurement tasks , to interact and communicate with the network probes , to issue measurement tasks and obtain measurement results , and to aggregate and present (producer worker, consumer worker, producer worker information, consumer worker information, consumer worker pairs) the measurement results uploaded by the network probes ; it has four components : a user interface module , a task scheduling module , a communication module and a database ; the network probes , composed of embedded devices or computers that have network measurement capability , can interact and communicate with the server and form a distributed cluster , are used to receive and execute measurement tasks from the server and to report the measurement results to the server ; the network measurement performed by the network probes is mainly active measurement : the network state is judged by sending data into the network and observing the transmission conditions , the time required and the results ; they have three components : a communication module , a task scheduling module and a test module .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker (结果进行) on a first virtual machine ;

and executing the consumer worker (结果进行) on a second virtual machine (周期时间) , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks , wherein the system consists of a server located in the core network and a plurality of network probes located in the network under test , arranged in a server/client architecture ; wherein : the server , composed of computers or servers with massive network data processing capability , is used to generate measurement tasks , to interact and communicate with the network probes , to issue measurement tasks and obtain measurement results , and to aggregate and present (producer worker, consumer worker, producer worker information, consumer worker information, consumer worker pairs) the measurement results uploaded by the network probes ; it has four components : a user interface module , a task scheduling module , a communication module and a database ; the network probes , composed of embedded devices or computers that have network measurement capability , can interact and communicate with the server and form a distributed cluster , are used to receive and execute measurement tasks from the server and to report the measurement results to the server ; the network measurement performed by the network probes is mainly active measurement : the network state is judged by sending data into the network and observing the transmission conditions , the time required and the results ; they have three components : a communication module , a task scheduling module and a test module .

CN102891779A
CLAIM 8
. The measurement method according to claim 6 , wherein said step 2 comprises the following operations : (21) after the network probe queries a DNS server to obtain the server's IP address , it initiates a TCP connection to the server so that the two sides establish a connection ; (22) once the connection is established , the network probe places the previous measurement result in an HTTP packet and sends it to the server ; the measurement result data is formatted using the data interchange language JSON and placed in the HTTP POST message payload ; (23) after receiving the probe's test result , the server takes the next cycle's measurement task out of that probe's task queue in the database , formats it using JSON , places it in the HTTP return message and sends it to the network probe ; (24) after receiving the measurement-task message , the network probe parses the data from the JSON format , stores it in its task queue and ends the current TCP connection ; (25) the network probe performs active network measurement according to the received measurement task ; once the configured cycle time (second virtual machine, virtual machine manager) is reached , it returns to step (22) , communicates with the server , places the current measurement result in an HTTP packet and sends it to the server ; a new measurement cycle then begins .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (周期时间) (VMM) application , wherein the VMM application is configured to : detect a producer worker (结果进行) at a first server (传输协议) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker (结果进行) at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks , wherein the system consists of a server located in the core network and a plurality of network probes located in the network under test , arranged in a server/client architecture ; wherein : the server , composed of computers or servers with massive network data processing capability , is used to generate measurement tasks , to interact and communicate with the network probes , to issue measurement tasks and obtain measurement results , and to aggregate and present (producer worker, consumer worker, producer worker information, consumer worker information, consumer worker pairs) the measurement results uploaded by the network probes ; it has four components : a user interface module , a task scheduling module , a communication module and a database ; the network probes , composed of embedded devices or computers that have network measurement capability , can interact and communicate with the server and form a distributed cluster , are used to receive and execute measurement tasks from the server and to report the measurement results to the server ; the network measurement performed by the network probes is mainly active measurement : the network state is judged by sending data into the network and observing the transmission conditions , the time required and the results ; they have three components : a communication module , a task scheduling module and a test module .

CN102891779A
CLAIM 4
. The system according to claim 1 , wherein the interactive communication between the network-probe interface of the server's communication module and the network probes uses the hypertext transfer protocol (first server) HTTP , in order to guarantee quality of service (QoS) and to allow flexible firewall configuration : during the measurement process , the network probes periodically initiate transmission control protocol (TCP) connections and connect to the server by domain name , so that the system can traverse gateway network address translation (NAT) and perform cross-network measurement to the greatest possible extent ; and because domain-name access is used , the server can implement load balancing at the domain name system (DNS) layer and its IP address can be switched and migrated ; the network probes use HTTP POST to place measurement tasks and measurement results in packet payloads in a compact data-encapsulation format and send them to the server , and the server returns HTTP response packets to the network probes , so as to avoid out-of-memory conditions on the embedded network probes .

CN102891779A
CLAIM 8
. The measurement method according to claim 6 , wherein said step 2 comprises the following operations : (21) after the network probe queries a DNS server to obtain the server's IP address , it initiates a TCP connection to the server so that the two sides establish a connection ; (22) once the connection is established , the network probe places the previous measurement result in an HTTP packet and sends it to the server ; the measurement result data is formatted using the data interchange language JSON and placed in the HTTP POST message payload ; (23) after receiving the probe's test result , the server takes the next cycle's measurement task out of that probe's task queue in the database , formats it using JSON , places it in the HTTP return message and sends it to the network probe ; (24) after receiving the measurement-task message , the network probe parses the data from the JSON format , stores it in its task queue and ends the current TCP connection ; (25) the network probe performs active network measurement according to the received measurement task ; once the configured cycle time (second virtual machine, virtual machine manager) is reached , it returns to step (22) , communicates with the server , places the current measurement result in an HTTP packet and sends it to the server ; a new measurement cycle then begins .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server (传输协议) .
CN102891779A
CLAIM 4
. The system according to claim 1 , wherein the interactive communication between the network-probe interface of the server's communication module and the network probes uses the hypertext transfer protocol (first server) HTTP , in order to guarantee quality of service (QoS) and to allow flexible firewall configuration : during the measurement process , the network probes periodically initiate transmission control protocol (TCP) connections and connect to the server by domain name , so that the system can traverse gateway network address translation (NAT) and perform cross-network measurement to the greatest possible extent ; and because domain-name access is used , the server can implement load balancing at the domain name system (DNS) layer and its IP address can be switched and migrated ; the network probes use HTTP POST to place measurement tasks and measurement results in packet payloads in a compact data-encapsulation format and send them to the server , and the server returns HTTP response packets to the network probes , so as to avoid out-of-memory conditions on the embedded network probes .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker (结果进行) executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
CN102891779A
CLAIM 1
. A large-scale network performance measurement system for IP networks , wherein the system consists of a server located in the core network and a plurality of network probes located in the network under test , arranged in a server/client architecture ; wherein : the server , composed of computers or servers with massive network data processing capability , is used to generate measurement tasks , to interact and communicate with the network probes , to issue measurement tasks and obtain measurement results , and to aggregate and present (producer worker, consumer worker, producer worker information, consumer worker information, consumer worker pairs) the measurement results uploaded by the network probes ; it has four components : a user interface module , a task scheduling module , a communication module and a database ; the network probes , composed of embedded devices or computers that have network measurement capability , can interact and communicate with the server and form a distributed cluster , are used to receive and execute measurement tasks from the server and to report the measurement results to the server ; the network measurement performed by the network probes is mainly active measurement : the network state is judged by sending data into the network and observing the transmission conditions , the time required and the results ; they have three components : a communication module , a task scheduling module and a test module .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker (结果进行) associated with the message .
CN102891779A
CLAIM 1
. 一种用于IP网络的大规模网络性能测量系统,其特征在于:所述系统是由呈服务器与客户端架构的位于核心网的服务器与位于被测网络中的多个网络探针所组成的;其中: 服务器,由具有海量网络数据处理能力的计算机或服务器组成,用于生成测量任务,并与网络探针进行交互通信,下发测量任务和获取测量结果,并对网络探针上传的测量结果进行 (producer worker, consumer worker, producer worker information, consumer worker information, consumer worker pairs) 汇总和呈现;设有:用户接口模块、任务调度模块、通信模块和数据库四个部件; 网络探针,由具有网络测量能力、并能与服务器交互通信和呈分布式集群的嵌入式设备或计算机组成,用于接收和执行来自服务器的测量任务,并将测量结果上报服务器;该网络探针执行、完成的网络测量以主动网络测量为主:通过向网络中发送数据、观察传输状况、所需时间和结果来判断网络状态;设有通信模块、任务调度模块和测试模块三个部件。
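
Claim 10 identifies the producer worker by observing network traffic rather than by explicit registration. A toy sketch; the packet dictionaries and the known queue-endpoint address stand in for whatever a VMM's virtual switch would actually expose, and are assumptions only.

```python
# Assumed address of the remote datacenter queue endpoint.
QUEUE_ENDPOINTS = {"10.0.0.50"}

def identify_producer(packets):
    # Claim element: observe network traffic through a network connection to
    # identify the producer worker associated with the message.
    producers = {}
    for pkt in packets:
        if pkt["dst_ip"] in QUEUE_ENDPOINTS and pkt["op"] == "send":
            producers[pkt["message_id"]] = pkt["src_vm"]
    return producers

packets = [
    {"src_vm": "vm-7", "dst_ip": "10.0.0.50", "op": "send", "message_id": "m-1"},
    {"src_vm": "vm-9", "dst_ip": "10.0.0.99", "op": "send", "message_id": "m-2"},
]
print(identify_producer(packets))  # {'m-1': 'vm-7'}
```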

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (结果进行) information , consumer worker (结果进行) information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
CN102891779A
CLAIM 1
. 一种用于IP网络的大规模网络性能测量系统,其特征在于:所述系统是由呈服务器与客户端架构的位于核心网的服务器与位于被测网络中的多个网络探针所组成的;其中: 服务器,由具有海量网络数据处理能力的计算机或服务器组成,用于生成测量任务,并与网络探针进行交互通信,下发测量任务和获取测量结果,并对网络探针上传的测量结果进行 (producer worker, consumer worker, producer worker information, consumer worker information, consumer worker pairs) 汇总和呈现;设有:用户接口模块、任务调度模块、通信模块和数据库四个部件; 网络探针,由具有网络测量能力、并能与服务器交互通信和呈分布式集群的嵌入式设备或计算机组成,用于接收和执行来自服务器的测量任务,并将测量结果上报服务器;该网络探针执行、完成的网络测量以主动网络测量为主:通过向网络中发送数据、观察传输状况、所需时间和结果来判断网络状态;设有通信模块、任务调度模块和测试模块三个部件。
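
Claim 12's queue user table is essentially an index from each datacenter queue to the local workers observed producing to or consuming from it. A minimal sketch of building such a table from observed usage records; the record shape is an assumption.

```python
from collections import defaultdict

def build_queue_user_table(observations):
    # Claim element: construct a queue user table based on observed queue
    # usage information (producer/consumer identity plus the queue each uses).
    table = defaultdict(lambda: {"producers": set(), "consumers": set()})
    for obs in observations:
        table[obs["queue"]][obs["role"] + "s"].add(obs["worker"])
    return table

observed = [
    {"worker": "vm-7", "role": "producer", "queue": "jobs"},
    {"worker": "vm-8", "role": "consumer", "queue": "jobs"},
]
print(dict(build_queue_user_table(observed)))
```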

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker (结果进行) and consumer worker (结果进行) pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
CN102891779A
CLAIM 1
. 一种用于IP网络的大规模网络性能测量系统,其特征在于:所述系统是由呈服务器与客户端架构的位于核心网的服务器与位于被测网络中的多个网络探针所组成的;其中: 服务器,由具有海量网络数据处理能力的计算机或服务器组成,用于生成测量任务,并与网络探针进行交互通信,下发测量任务和获取测量结果,并对网络探针上传的测量结果进行 (producer worker, consumer worker, producer worker information, consumer worker information, consumer worker pairs) 汇总和呈现;设有:用户接口模块、任务调度模块、通信模块和数据库四个部件; 网络探针,由具有网络测量能力、并能与服务器交互通信和呈分布式集群的嵌入式设备或计算机组成,用于接收和执行来自服务器的测量任务,并将测量结果上报服务器;该网络探针执行、完成的网络测量以主动网络测量为主:通过向网络中发送数据、观察传输状况、所需时间和结果来判断网络状态;设有通信模块、任务调度模块和测试模块三个部件。

CN102891779A
CLAIM 4
. 根据权利要求I所述的系统,其特征在于:所述服务器的通信模块网络探针接口与网络探针之间的交互通信使用超文本传输协议HTTP,以保证服务质量QoS和便利于灵活配置防火墙:测量过程 (determine matching producer worker) 中,由网络探针定期发起传输控制协议TCP连接,通过域名方式连接服务器,使得该系统能够穿越网关的网络地址转换NAT,进行最大程度的跨网络测量;且因采用域名访问方式,使得服务器能够在域名系统DNS层实现负载均衡,并保证服务器的IP地址能够实现切换与迁移;网络探针使用HTTP POST方式将测量任务和测量结果放入小型数据封装格式的数据包载荷内发送给服务器;再由服务器向网络探针返回HTTP响应的数据包,以避免出现嵌入式网络探针出现内存不足的现象。
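
Claim 14 uses the queue user table to find matched pairs: a local producer and a local consumer that address the same datacenter queue, which are the candidates for local short-circuiting. A sketch over the same table shape as the claim 12 example above:

```python
def matching_pairs(queue_user_table):
    # Claim element: determine matching producer worker and consumer worker
    # pairs, i.e. a producer and a consumer on this server that both address
    # the same datacenter queue.
    pairs = []
    for queue, users in queue_user_table.items():
        for producer in users["producers"]:
            for consumer in users["consumers"]:
                pairs.append((queue, producer, consumer))
    return pairs

table = {"jobs": {"producers": {"vm-7"}, "consumers": {"vm-8"}}}
print(matching_pairs(table))  # [('jobs', 'vm-7', 'vm-8')]
```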

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker (结果进行) pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker (结果进行) and the consumer worker .
CN102891779A
CLAIM 1
. 一种用于IP网络的大规模网络性能测量系统,其特征在于:所述系统是由呈服务器与客户端架构的位于核心网的服务器与位于被测网络中的多个网络探针所组成的;其中: 服务器,由具有海量网络数据处理能力的计算机或服务器组成,用于生成测量任务,并与网络探针进行交互通信,下发测量任务和获取测量结果,并对网络探针上传的测量结果进行 (producer worker, consumer worker, producer worker information, consumer worker information, consumer worker pairs) 汇总和呈现;设有:用户接口模块、任务调度模块、通信模块和数据库四个部件; 网络探针,由具有网络测量能力、并能与服务器交互通信和呈分布式集群的嵌入式设备或计算机组成,用于接收和执行来自服务器的测量任务,并将测量结果上报服务器;该网络探针执行、完成的网络测量以主动网络测量为主:通过向网络中发送数据、观察传输状况、所需时间和结果来判断网络状态;设有通信模块、任务调度模块和测试模块三个部件。
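
Claim 15 feeds the matched-queue information to the intercept module, which is where the claimed speed-up comes from: sends to a matched queue can be short-circuited to the local cache instead of crossing the network to the second server. A sketch; the fast-path flag and return tuples are illustrative only.

```python
class InterceptModule:
    def __init__(self):
        self.matched_queues = set()

    def enable_fast_path(self, queue_name):
        # Claim element: matched queue information is provided to the intercept
        # module so later messages between the pair are handled locally.
        self.matched_queues.add(queue_name)

    def handle_send(self, queue_name, message):
        if queue_name in self.matched_queues:
            return ("local", message)   # short-circuit to the queue cache
        return ("remote", message)      # forward to the datacenter queue

intercept = InterceptModule()
intercept.enable_fast_path("jobs")
print(intercept.handle_send("jobs", {"task": "thumbnail"}))   # ('local', ...)
print(intercept.handle_send("other", {"task": "report"}))     # ('remote', ...)
```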

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker (结果进行) ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker (结果进行) in response to the message request .
CN102891779A
CLAIM 1
. 一种用于IP网络的大规模网络性能测量系统,其特征在于:所述系统是由呈服务器与客户端架构的位于核心网的服务器与位于被测网络中的多个网络探针所组成的;其中: 服务器,由具有海量网络数据处理能力的计算机或服务器组成,用于生成测量任务,并与网络探针进行交互通信,下发测量任务和获取测量结果,并对网络探针上传的测量结果进行 (producer worker, consumer worker, producer worker information, consumer worker information, consumer worker pairs) 汇总和呈现;设有:用户接口模块、任务调度模块、通信模块和数据库四个部件; 网络探针,由具有网络测量能力、并能与服务器交互通信和呈分布式集群的嵌入式设备或计算机组成,用于接收和执行来自服务器的测量任务,并将测量结果上报服务器;该网络探针执行、完成的网络测量以主动网络测量为主:通过向网络中发送数据、观察传输状况、所需时间和结果来判断网络状态;设有通信模块、任务调度模块和测试模块三个部件。

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker (结果进行) at a first server (传输协议) , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker (结果进行) at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
CN102891779A
CLAIM 1
. 一种用于IP网络的大规模网络性能测量系统,其特征在于:所述系统是由呈服务器与客户端架构的位于核心网的服务器与位于被测网络中的多个网络探针所组成的;其中: 服务器,由具有海量网络数据处理能力的计算机或服务器组成,用于生成测量任务,并与网络探针进行交互通信,下发测量任务和获取测量结果,并对网络探针上传的测量结果进行 (producer worker, consumer worker, producer worker information, consumer worker information, consumer worker pairs) 汇总和呈现;设有:用户接口模块、任务调度模块、通信模块和数据库四个部件; 网络探针,由具有网络测量能力、并能与服务器交互通信和呈分布式集群的嵌入式设备或计算机组成,用于接收和执行来自服务器的测量任务,并将测量结果上报服务器;该网络探针执行、完成的网络测量以主动网络测量为主:通过向网络中发送数据、观察传输状况、所需时间和结果来判断网络状态;设有通信模块、任务调度模块和测试模块三个部件。

CN102891779A
CLAIM 4
. 根据权利要求I所述的系统,其特征在于:所述服务器的通信模块网络探针接口与网络探针之间的交互通信使用超文本传输协议 (first server) HTTP,以保证服务质量QoS和便利于灵活配置防火墙:测量过程中,由网络探针定期发起传输控制协议TCP连接,通过域名方式连接服务器,使得该系统能够穿越网关的网络地址转换NAT,进行最大程度的跨网络测量;且因采用域名访问方式,使得服务器能够在域名系统DNS层实现负载均衡,并保证服务器的IP地址能够实现切换与迁移;网络探针使用HTTP POST方式将测量任务和测量结果放入小型数据封装格式的数据包载荷内发送给服务器;再由服务器向网络探针返回HTTP响应的数据包,以避免出现嵌入式网络探针出现内存不足的现象。
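
Claim 17 recites the same behavior as instructions stored on a computer-readable device. One way to read the ordering of those steps is as a single pass over intercepted events, sketched below; the event dictionaries are an assumed encoding, not anything recited in the claim.

```python
from collections import defaultdict, deque

def process_locally(events):
    # One pass over intercepted events in the order claim 17 recites them:
    # send -> cache locally, receive request -> provide, command-channel
    # signal -> modify the cached message.
    cache = defaultdict(deque)
    delivered = []
    for event in events:
        if event["type"] == "send":        # producer detected, message intercepted
            cache[event["queue"]].append(dict(event["message"]))
        elif event["type"] == "receive":   # consumer's message request
            if cache[event["queue"]]:
                delivered.append(cache[event["queue"]].popleft())
        elif event["type"] == "signal":    # command-channel signal
            for message in cache[event["queue"]]:
                message.update(event["fields"])
    return delivered, cache

events = [
    {"type": "send", "queue": "jobs", "message": {"id": 1, "state": "new"}},
    {"type": "signal", "queue": "jobs", "fields": {"state": "claimed"}},
    {"type": "receive", "queue": "jobs"},
]
print(process_locally(events))
```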

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (结果进行) information , consumer worker (结果进行) information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
CN102891779A
CLAIM 1
. 一种用于IP网络的大规模网络性能测量系统,其特征在于:所述系统是由呈服务器与客户端架构的位于核心网的服务器与位于被测网络中的多个网络探针所组成的;其中: 服务器,由具有海量网络数据处理能力的计算机或服务器组成,用于生成测量任务,并与网络探针进行交互通信,下发测量任务和获取测量结果,并对网络探针上传的测量结果进行 (producer worker, consumer worker, producer worker information, consumer worker information, consumer worker pairs) 汇总和呈现;设有:用户接口模块、任务调度模块、通信模块和数据库四个部件; 网络探针,由具有网络测量能力、并能与服务器交互通信和呈分布式集群的嵌入式设备或计算机组成,用于接收和执行来自服务器的测量任务,并将测量结果上报服务器;该网络探针执行、完成的网络测量以主动网络测量为主:通过向网络中发送数据、观察传输状况、所需时间和结果来判断网络状态;设有通信模块、任务调度模块和测试模块三个部件。
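
Claim 18 adds updating the queue user table as new usage is observed. A sketch that refreshes a last-seen timestamp per worker so stale entries could later be aged out; the timestamping is an assumption the claim itself does not require.

```python
import time

def update_queue_user_table(table, worker, role, queue):
    # Claim element: updating the queue user table based on the observed
    # queue usage information.
    entry = table.setdefault(queue, {"producers": {}, "consumers": {}})
    entry[role + "s"][worker] = time.time()  # last-seen timestamp (assumption)
    return table

table = {}
update_queue_user_table(table, "vm-7", "producer", "jobs")
update_queue_user_table(table, "vm-8", "consumer", "jobs")
print(sorted(table["jobs"]["producers"]), sorted(table["jobs"]["consumers"]))
```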

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker (结果进行) and consumer worker (结果进行) pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
CN102891779A
CLAIM 1
. 一种用于IP网络的大规模网络性能测量系统,其特征在于:所述系统是由呈服务器与客户端架构的位于核心网的服务器与位于被测网络中的多个网络探针所组成的;其中: 服务器,由具有海量网络数据处理能力的计算机或服务器组成,用于生成测量任务,并与网络探针进行交互通信,下发测量任务和获取测量结果,并对网络探针上传的测量结果进行 (producer worker, consumer worker, producer worker information, consumer worker information, consumer worker pairs) 汇总和呈现;设有:用户接口模块、任务调度模块、通信模块和数据库四个部件; 网络探针,由具有网络测量能力、并能与服务器交互通信和呈分布式集群的嵌入式设备或计算机组成,用于接收和执行来自服务器的测量任务,并将测量结果上报服务器;该网络探针执行、完成的网络测量以主动网络测量为主:通过向网络中发送数据、观察传输状况、所需时间和结果来判断网络状态;设有通信模块、任务调度模块和测试模块三个部件。

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker (结果进行) associated with the message request and the datacenter queue associated with the message request .
CN102891779A
CLAIM 1
. 一种用于IP网络的大规模网络性能测量系统,其特征在于:所述系统是由呈服务器与客户端架构的位于核心网的服务器与位于被测网络中的多个网络探针所组成的;其中: 服务器,由具有海量网络数据处理能力的计算机或服务器组成,用于生成测量任务,并与网络探针进行交互通信,下发测量任务和获取测量结果,并对网络探针上传的测量结果进行 (producer worker, consumer worker, producer worker information, consumer worker information, consumer worker pairs) 汇总和呈现;设有:用户接口模块、任务调度模块、通信模块和数据库四个部件; 网络探针,由具有网络测量能力、并能与服务器交互通信和呈分布式集群的嵌入式设备或计算机组成,用于接收和执行来自服务器的测量任务,并将测量结果上报服务器;该网络探针执行、完成的网络测量以主动网络测量为主:通过向网络中发送数据、观察传输状况、所需时间和结果来判断网络状态;设有通信模块、任务调度模块和测试模块三个部件。




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102800014A

Filed: 2012-07-13     Issued: 2012-11-28

一种用于供应链融资的金融数据处理方法

(Original Assignee) BEIJING TEAMSUN SOFTWARE TECHNOLOGY Co Ltd; Beijing Teamsun Technology Co Ltd     (Current Assignee) BEIJING TEAMSUN SOFTWARE TECHNOLOGY Co Ltd ; Beijing Teamsun Technology Co Ltd

吴林, 马东平, 胡联奎
US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel (数据通道) associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
CN102800014A
CLAIM 1
. 一种用于供应链融资的金融数据处理方法,其特征在于包括如下步骤: 步骤一、针对不同的企业信息系统建立多种相应的数据通道 (command channel) , 步骤二、针对每种数据通道建立相应的适配器,通过适配器将经由相应数据通道取来的不同类型的数据按照类型存储到缓存器中; 步骤三、根据融资系统所需要的数据类型以及格式建立一张逻辑上是矩阵关系的XML表 ;
步骤四、读取缓存器中不同类型的数据,依据所述XML表中的矩阵关系将适配器通过不同数据通道获取的不同类型的数据转换成融资系统所需的数据类型。

US9479472B2
CLAIM 7
. A computing device to provide local processing (的缓存) of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel (数据通道) associated with the datacenter queue .
CN102800014A
CLAIM 1
. 一种用于供应链融资的金融数据处理方法,其特征在于包括如下步骤: 步骤一、针对不同的企业信息系统建立多种相应的数据通道 (command channel) , 步骤二、针对每种数据通道建立相应的适配器,通过适配器将经由相应数据通道取来的不同类型的数据按照类型存储到缓存器中; 步骤三、根据融资系统所需要的数据类型以及格式建立一张逻辑上是矩阵关系的XML表 ;
步骤四、读取缓存器中不同类型的数据,依据所述XML表中的矩阵关系将适配器通过不同数据通道获取的不同类型的数据转换成融资系统所需的数据类型。

CN102800014A
CLAIM 3
. 根据权利要求I所述的用于供应链融资的金融数据处理方法,其特征在于所述适配器设有缓存数组,其根据读取到的数据类型按照if switch语句匹配数据类型存储到相应的缓存 (computing device to provide local processing) 空间中。

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel (数据通道) associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
CN102800014A
CLAIM 1
. 一种用于供应链融资的金融数据处理方法,其特征在于包括如下步骤: 步骤一、针对不同的企业信息系统建立多种相应的数据通道 (command channel) , 步骤二、针对每种数据通道建立相应的适配器,通过适配器将经由相应数据通道取来的不同类型的数据按照类型存储到缓存器中; 步骤三、根据融资系统所需要的数据类型以及格式建立一张逻辑上是矩阵关系的XML表 ;
步骤四、读取缓存器中不同类型的数据,依据所述XML表中的矩阵关系将适配器通过不同数据通道获取的不同类型的数据转换成融资系统所需的数据类型。




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102622426A

Filed: 2012-02-27     Issued: 2012-08-01

数据库写入系统及方法

(Original Assignee) HANGZHOU SHANLIANG TECHNOLOGY Co Ltd     (Current Assignee) HANGZHOU SHANLIANG TECHNOLOGY Co Ltd

俞晓鸿
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache (个状态) at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
CN102622426A
CLAIM 3
. 如权利要求2所述的数据库写入系统,其特征在于:该任务备份模组中还存储日志文件,该日志文件用于记录每个任务在被触发到结束的各个状态 (queue cache) 以及异常。

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache (个状态) at the second server .
CN102622426A
CLAIM 3
. 如权利要求2所述的数据库写入系统,其特征在于:该任务备份模组中还存储日志文件,该日志文件用于记录每个任务在被触发到结束的各个状态 (queue cache) 以及异常。

US9479472B2
CLAIM 7
. A computing device to provide local processing (的缓存) of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache (个状态) at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
CN102622426A
CLAIM 1
. 一种数据库写入系统,至少包括: 控制模组,以于接收到写入请求时将该写入请求记录到线程池的子任务队列中,并将该写入请求的任务数据写入到缓存模组的缓存 (computing device to provide local processing) 中,同时,该控制模组还用于根据定时扫描模组的扫描结果向异步消息处理模组发送异步请求; 线程池,包括多个子任务队列,每个子任务队列存储多个任务; 缓存模组,用于存储该写入请求中的任务数据; 定时扫描模组,用于定时扫描该线程池中的子任务队列,以判断子任务队列是否已到释放时间,并于判断出某一子任务队列到释放时间时,将该子任务队列中的数据进行分组; 异步消息处理模组,处理该控制模组的异步请求,接收该子任务队列在释放过程中的参数信息并记录队列任务,并于监听到新任务后将相应的数据写入数据库;以及数据库,接收数据的写入。

CN102622426A
CLAIM 3
. 如权利要求2所述的数据库写入系统,其特征在于:该任务备份模组中还存储日志文件,该日志文件用于记录每个任务在被触发到结束的各个状态 (queue cache) 以及异常。

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection (请求时) to identify the producer worker associated with the message .
CN102622426A
CLAIM 1
. 一种数据库写入系统,至少包括: 控制模组,以于接收到写入请求时 (network connection) 将该写入请求记录到线程池的子任务队列中,并将该写入请求的任务数据写入到缓存模组的缓存中,同时,该控制模组还用于根据定时扫描模组的扫描结果向异步消息处理模组发送异步请求; 线程池,包括多个子任务队列,每个子任务队列存储多个任务; 缓存模组,用于存储该写入请求中的任务数据; 定时扫描模组,用于定时扫描该线程池中的子任务队列,以判断子任务队列是否已到释放时间,并于判断出某一子任务队列到释放时间时,将该子任务队列中的数据进行分组; 异步消息处理模组,处理该控制模组的异步请求,接收该子任务队列在释放过程中的参数信息并记录队列任务,并于监听到新任务后将相应的数据写入数据库;以及数据库,接收数据的写入。

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection (请求时) to detect the datacenter queue associated with the message .
CN102622426A
CLAIM 1
. 一种数据库写入系统,至少包括: 控制模组,以于接收到写入请求时 (network connection) 将该写入请求记录到线程池的子任务队列中,并将该写入请求的任务数据写入到缓存模组的缓存中,同时,该控制模组还用于根据定时扫描模组的扫描结果向异步消息处理模组发送异步请求; 线程池,包括多个子任务队列,每个子任务队列存储多个任务; 缓存模组,用于存储该写入请求中的任务数据; 定时扫描模组,用于定时扫描该线程池中的子任务队列,以判断子任务队列是否已到释放时间,并于判断出某一子任务队列到释放时间时,将该子任务队列中的数据进行分组; 异步消息处理模组,处理该控制模组的异步请求,接收该子任务队列在释放过程中的参数信息并记录队列任务,并于监听到新任务后将相应的数据写入数据库;以及数据库,接收数据的写入。
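
Claim 11 is the counterpart of claim 10: the traffic observation detects the datacenter queue, rather than the producer, associated with a message. A sketch assuming the queue name appears in the request path, which is an illustrative wire format only.

```python
from urllib.parse import urlparse

def detect_queue_from_traffic(flows):
    # Claim element: observe network traffic through a network connection to
    # detect the datacenter queue associated with the message.
    queues = set()
    for flow in flows:
        path = urlparse(flow["url"]).path
        if path.startswith("/queues/"):
            queues.add(path.split("/")[2])
    return queues

flows = [{"url": "https://dc.example.internal/queues/jobs/messages"}]
print(detect_queue_from_traffic(flows))  # {'jobs'}
```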

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache (个状态) ;

and provide the intercepted message to the consumer worker in response to the message request .
CN102622426A
CLAIM 3
. 如权利要求2所述的数据库写入系统,其特征在于:该任务备份模组中还存储日志文件,该日志文件用于记录每个任务在被触发到结束的各个状态 (queue cache) 以及异常。

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache (个状态) at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
CN102622426A
CLAIM 3
. 如权利要求2所述的数据库写入系统,其特征在于:该任务备份模组中还存储日志文件,该日志文件用于记录每个任务在被触发到结束的各个状态 (queue cache) 以及异常。




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102646064A

Filed: 2012-02-15     Issued: 2012-08-22

支持迁移的增量虚拟机备份

(Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC

A.贝斯巴鲁亚, C.L.埃克, S.K.D.鲍米克, H.S.苏泰, H.郝
US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command (存储的指令) from the datacenter queue , deleting the message from the datacenter queue .
CN102646064A
CLAIM 11
. 一种计算机可读媒体,包括在其上存储的指令 (delete command, store instructions) ,所述指令响应于由计算设备的执行而使得该计算设备执行按照权利要求I一 5中任一项的方法。

US9479472B2
CLAIM 7
. A computing device (一种计算设备) to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions (存储的指令) ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
CN102646064A
CLAIM 6
. 一种计算设备 (computing device) ,包括: 一个或多个处理器(502) ;
以及 一个或多个计算机可读媒体(504),其上存储有多个指令,所述指令当由所述一个或多个处理器执行时使得所述一个或多个处理器: 接收(402)在第一主机设备上的虚拟机的快照的改变; 保留(404)自从虚拟机上次被备份以来所接收的所述快照的改变的记录; 识别(406)执行虚拟机的增量备份的时间; 响应于到了执行增量备份的时间,而根据改变的记录,对一部分快照进行备份(408) ;
识别(410)将虚拟机迁移到第二主机设备的时间;以及 响应于到了将虚拟机迁移到第二主机设备的时间,而将快照的改变的记录和快照中的一个或多个迁移(412)到第二主机设备。

CN102646064A
CLAIM 11
. 一种计算机可读媒体,包括在其上存储的指令 (delete command, store instructions) ,所述指令响应于由计算设备的执行而使得该计算设备执行按照权利要求I一 5中任一项的方法。

US9479472B2
CLAIM 8
. The computing device (一种计算设备) of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command (存储的指令) from the datacenter queue , delete the message from the first server .
CN102646064A
CLAIM 6
. 一种计算设备 (computing device) ,包括: 一个或多个处理器(502) ;
以及 一个或多个计算机可读媒体(504),其上存储有多个指令,所述指令当由所述一个或多个处理器执行时使得所述一个或多个处理器: 接收(402)在第一主机设备上的虚拟机的快照的改变; 保留(404)自从虚拟机上次被备份以来所接收的所述快照的改变的记录; 识别(406)执行虚拟机的增量备份的时间; 响应于到了执行增量备份的时间,而根据改变的记录,对一部分快照进行备份(408) ;
识别(410)将虚拟机迁移到第二主机设备的时间;以及 响应于到了将虚拟机迁移到第二主机设备的时间,而将快照的改变的记录和快照中的一个或多个迁移(412)到第二主机设备。

CN102646064A
CLAIM 11
. 一种计算机可读媒体,包括在其上存储的指令 (delete command, store instructions) ,所述指令响应于由计算设备的执行而使得该计算设备执行按照权利要求I一 5中任一项的方法。

US9479472B2
CLAIM 9
. The computing device (一种计算设备) of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
CN102646064A
CLAIM 6
. 一种计算设备 (computing device) ,包括: 一个或多个处理器(502) ;
以及 一个或多个计算机可读媒体(504),其上存储有多个指令,所述指令当由所述一个或多个处理器执行时使得所述一个或多个处理器: 接收(402)在第一主机设备上的虚拟机的快照的改变; 保留(404)自从虚拟机上次被备份以来所接收的所述快照的改变的记录; 识别(406)执行虚拟机的增量备份的时间; 响应于到了执行增量备份的时间,而根据改变的记录,对一部分快照进行备份(408) ;
识别(410)将虚拟机迁移到第二主机设备的时间;以及 响应于到了将虚拟机迁移到第二主机设备的时间,而将快照的改变的记录和快照中的一个或多个迁移(412)到第二主机设备。

US9479472B2
CLAIM 10
. The computing device (一种计算设备) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker associated with the message .
CN102646064A
CLAIM 6
. 一种计算设备 (computing device) ,包括: 一个或多个处理器(502) ;
以及 一个或多个计算机可读媒体(504),其上存储有多个指令,所述指令当由所述一个或多个处理器执行时使得所述一个或多个处理器: 接收(402)在第一主机设备上的虚拟机的快照的改变; 保留(404)自从虚拟机上次被备份以来所接收的所述快照的改变的记录; 识别(406)执行虚拟机的增量备份的时间; 响应于到了执行增量备份的时间,而根据改变的记录,对一部分快照进行备份(408) ;
识别(410)将虚拟机迁移到第二主机设备的时间;以及 响应于到了将虚拟机迁移到第二主机设备的时间,而将快照的改变的记录和快照中的一个或多个迁移(412)到第二主机设备。

US9479472B2
CLAIM 11
. The computing device (一种计算设备) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue associated with the message .
CN102646064A
CLAIM 6
. 一种计算设备 (computing device) ,包括: 一个或多个处理器(502) ;
以及 一个或多个计算机可读媒体(504),其上存储有多个指令,所述指令当由所述一个或多个处理器执行时使得所述一个或多个处理器: 接收(402)在第一主机设备上的虚拟机的快照的改变; 保留(404)自从虚拟机上次被备份以来所接收的所述快照的改变的记录; 识别(406)执行虚拟机的增量备份的时间; 响应于到了执行增量备份的时间,而根据改变的记录,对一部分快照进行备份(408) ;
识别(410)将虚拟机迁移到第二主机设备的时间;以及 响应于到了将虚拟机迁移到第二主机设备的时间,而将快照的改变的记录和快照中的一个或多个迁移(412)到第二主机设备。

US9479472B2
CLAIM 12
. The computing device (一种计算设备) of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
CN102646064A
CLAIM 6
. 一种计算设备 (computing device) ,包括: 一个或多个处理器(502) ;
以及 一个或多个计算机可读媒体(504),其上存储有多个指令,所述指令当由所述一个或多个处理器执行时使得所述一个或多个处理器: 接收(402)在第一主机设备上的虚拟机的快照的改变; 保留(404)自从虚拟机上次被备份以来所接收的所述快照的改变的记录; 识别(406)执行虚拟机的增量备份的时间; 响应于到了执行增量备份的时间,而根据改变的记录,对一部分快照进行备份(408) ;
识别(410)将虚拟机迁移到第二主机设备的时间;以及 响应于到了将虚拟机迁移到第二主机设备的时间,而将快照的改变的记录和快照中的一个或多个迁移(412)到第二主机设备。

US9479472B2
CLAIM 13
. The computing device (一种计算设备) of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information .
CN102646064A
CLAIM 6
. 一种计算设备 (computing device) ,包括: 一个或多个处理器(502) ;
以及 一个或多个计算机可读媒体(504),其上存储有多个指令,所述指令当由所述一个或多个处理器执行时使得所述一个或多个处理器: 接收(402)在第一主机设备上的虚拟机的快照的改变; 保留(404)自从虚拟机上次被备份以来所接收的所述快照的改变的记录; 识别(406)执行虚拟机的增量备份的时间; 响应于到了执行增量备份的时间,而根据改变的记录,对一部分快照进行备份(408) ;
识别(410)将虚拟机迁移到第二主机设备的时间;以及 响应于到了将虚拟机迁移到第二主机设备的时间,而将快照的改变的记录和快照中的一个或多个迁移(412)到第二主机设备。

US9479472B2
CLAIM 14
. The computing device (一种计算设备) of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
CN102646064A
CLAIM 6
. 一种计算设备 (computing device) ,包括: 一个或多个处理器(502) ;
以及 一个或多个计算机可读媒体(504),其上存储有多个指令,所述指令当由所述一个或多个处理器执行时使得所述一个或多个处理器: 接收(402)在第一主机设备上的虚拟机的快照的改变; 保留(404)自从虚拟机上次被备份以来所接收的所述快照的改变的记录; 识别(406)执行虚拟机的增量备份的时间; 响应于到了执行增量备份的时间,而根据改变的记录,对一部分快照进行备份(408) ;
识别(410)将虚拟机迁移到第二主机设备的时间;以及 响应于到了将虚拟机迁移到第二主机设备的时间,而将快照的改变的记录和快照中的一个或多个迁移(412)到第二主机设备。

US9479472B2
CLAIM 15
. The computing device (一种计算设备) of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
CN102646064A
CLAIM 6
. 一种计算设备 (computing device) ,包括: 一个或多个处理器(502) ;
以及 一个或多个计算机可读媒体(504),其上存储有多个指令,所述指令当由所述一个或多个处理器执行时使得所述一个或多个处理器: 接收(402)在第一主机设备上的虚拟机的快照的改变; 保留(404)自从虚拟机上次被备份以来所接收的所述快照的改变的记录; 识别(406)执行虚拟机的增量备份的时间; 响应于到了执行增量备份的时间,而根据改变的记录,对一部分快照进行备份(408) ;
识别(410)将虚拟机迁移到第二主机设备的时间;以及 响应于到了将虚拟机迁移到第二主机设备的时间,而将快照的改变的记录和快照中的一个或多个迁移(412)到第二主机设备。

US9479472B2
CLAIM 16
. The computing device (一种计算设备) of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
CN102646064A
CLAIM 6
. 一种计算设备 (computing device) ,包括: 一个或多个处理器(502) ;
以及 一个或多个计算机可读媒体(504),其上存储有多个指令,所述指令当由所述一个或多个处理器执行时使得所述一个或多个处理器: 接收(402)在第一主机设备上的虚拟机的快照的改变; 保留(404)自从虚拟机上次被备份以来所接收的所述快照的改变的记录; 识别(406)执行虚拟机的增量备份的时间; 响应于到了执行增量备份的时间,而根据改变的记录,对一部分快照进行备份(408) ;
识别(410)将虚拟机迁移到第二主机设备的时间;以及 响应于到了将虚拟机迁移到第二主机设备的时间,而将快照的改变的记录和快照中的一个或多个迁移(412)到第二主机设备。




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102479108A

Filed: 2011-06-21     Issued: 2012-05-30

一种多应用进程的嵌入式系统终端资源管理系统及方法

(Original Assignee) Institute of Acoustics of CAS     (Current Assignee) Institute of Acoustics of CAS

孙鹏, 王海威, 张辉, 邓峰, 林军
US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command (调度和) from the datacenter queue , deleting the message from the datacenter queue .
CN102479108A
CLAIM 7
. 根据权利要求1所述的多应用进程的嵌入式系统终端资源管理系统,其特征在于, 所述终端资源调度模块进一步包含:终端资源规划子模块,当某种所述终端终端资源过载或冲突时,实现该终端资源的竞争调度和 (delete command) 优化分配,生成相应的应用进程调度列表;终端资源监控子模块,用于系统开机时终端资源收集所述嵌入式系统的终端终端资源信息,建立所述终端终端资源的状态列表,并进行实时监控,维护所述终端终端资源的使用状态;终端资源分配子模块,用于为运行中的应用进程提供终端资源访问的控制方法;和终端资源信息维护子模块,用于维护所述终端终端资源的状态列表。

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (终端的图像) (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
CN102479108A
CLAIM 9
. 一种多应用进程的嵌入式系统终端资源管理方法,该方法根据应用类型和用户使用应用的统计规律建立应用进程的动态优先级实现终端终端资源的竞争调度,所述的方法包含:建立应用进程的动态优先级的步骤,该步骤用于当多应用同时运行时,根据应用类型和用户使用应用的统计规律建立和调整应用进程的动态优先级;终端资源调度的步骤,当所述应用进程优先级发生变化时,进行终端资源竞争调度过程重新对终端资源进行竞争调度,优先保证高优先级应用的可靠运行;其中,所述的终端资源调度的步骤还包含:当系统中运行的应用较多而导致某种终端终端资源过载或冲突时或当有新应用启动或者有应用退出嵌入式系统时进行终端资源竞争调度过程,重新对所述终端终端资源进行竞争调度;所述的终端资源具体包含:终端的图像 (virtual machine manager) 资源和非图像资源。

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command (调度和) from the datacenter queue , delete the message from the first server .
CN102479108A
CLAIM 7
. 根据权利要求1所述的多应用进程的嵌入式系统终端资源管理系统,其特征在于, 所述终端资源调度模块进一步包含:终端资源规划子模块,当某种所述终端终端资源过载或冲突时,实现该终端资源的竞争调度和 (delete command) 优化分配,生成相应的应用进程调度列表;终端资源监控子模块,用于系统开机时终端资源收集所述嵌入式系统的终端终端资源信息,建立所述终端终端资源的状态列表,并进行实时监控,维护所述终端终端资源的使用状态;终端资源分配子模块,用于为运行中的应用进程提供终端资源访问的控制方法;和终端资源信息维护子模块,用于维护所述终端终端资源的状态列表。

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information (使用状态, 的使用) , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
CN102479108A
CLAIM 7
. 根据权利要求1所述的多应用进程的嵌入式系统终端资源管理系统,其特征在于, 所述终端资源调度模块进一步包含:终端资源规划子模块,当某种所述终端终端资源过载或冲突时,实现该终端资源的竞争调度和优化分配,生成相应的应用进程调度列表;终端资源监控子模块,用于系统开机时终端资源收集所述嵌入式系统的终端终端资源信息,建立所述终端终端资源的状态列表,并进行实时监控,维护所述终端终端资源的使用状态 (queue usage information) ;终端资源分配子模块,用于为运行中的应用进程提供终端资源访问的控制方法;和终端资源信息维护子模块,用于维护所述终端终端资源的状态列表。

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information (使用状态, 的使用) .
CN102479108A
CLAIM 7
. 根据权利要求1所述的多应用进程的嵌入式系统终端资源管理系统,其特征在于, 所述终端资源调度模块进一步包含:终端资源规划子模块,当某种所述终端终端资源过载或冲突时,实现该终端资源的竞争调度和优化分配,生成相应的应用进程调度列表;终端资源监控子模块,用于系统开机时终端资源收集所述嵌入式系统的终端终端资源信息,建立所述终端终端资源的状态列表,并进行实时监控,维护所述终端终端资源的使用状态 (queue usage information) ;终端资源分配子模块,用于为运行中的应用进程提供终端资源访问的控制方法;和终端资源信息维护子模块,用于维护所述终端终端资源的状态列表。

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs (冲突时) through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
CN102479108A
CLAIM 1
. 一种多应用进程的嵌入式系统终端资源管理系统,该嵌入式系统终端资源管理系统包括:应用进程调度模块和终端资源调度模块,其特征在于,所述应用进程调度模块,用于当多应用同时运行时,根据应用类型和用户使用应用的统计规律建立应用进程的动态优先级;和所述终端资源调度模块,用于当嵌入式系统终端资源管理系统中运行的应用较多而导致该嵌入式系统终端资源管理系统中的终端资源过载或冲突时 (consumer worker pairs) ,触发该终端资源调度模块重新进行终端资源的优化分配和调度;当有新应用开始运行时,触发该终端资源调度模块进行终端资源的优化分配和调度,为所述应用程序分配终端资源;及当所述应用进程优先级发生变化时,触发该终端资源调度模块进行终端资源的优化分配和调度,优先保证用户的高优先级应用的可靠运行;其中,如果所述的应用进程调度模块发现某个应用进程优先级发生变化或有应用进程退出时,该应用进程调度模块通知所述终端资源调度模块重新进行所述终端终端资源的规划和调度,所述应用进程均通过所述终端资源调度模块提供的策略进行终端终端资源访问;所述终端资源包含:CPU、内存、硬盘、解码器、解复用器和图形引擎。

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs (冲突时) , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
CN102479108A
CLAIM 1
. 一种多应用进程的嵌入式系统终端资源管理系统,该嵌入式系统终端资源管理系统包括:应用进程调度模块和终端资源调度模块,其特征在于,所述应用进程调度模块,用于当多应用同时运行时,根据应用类型和用户使用应用的统计规律建立应用进程的动态优先级;和所述终端资源调度模块,用于当嵌入式系统终端资源管理系统中运行的应用较多而导致该嵌入式系统终端资源管理系统中的终端资源过载或冲突时 (consumer worker pairs) ,触发该终端资源调度模块重新进行终端资源的优化分配和调度;当有新应用开始运行时,触发该终端资源调度模块进行终端资源的优化分配和调度,为所述应用程序分配终端资源;及当所述应用进程优先级发生变化时,触发该终端资源调度模块进行终端资源的优化分配和调度,优先保证用户的高优先级应用的可靠运行;其中,如果所述的应用进程调度模块发现某个应用进程优先级发生变化或有应用进程退出时,该应用进程调度模块通知所述终端资源调度模块重新进行所述终端终端资源的规划和调度,所述应用进程均通过所述终端资源调度模块提供的策略进行终端终端资源访问;所述终端资源包含:CPU、内存、硬盘、解码器、解复用器和图形引擎。

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information (使用状态, 的使用) , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
CN102479108A
CLAIM 7
. 根据权利要求1所述的多应用进程的嵌入式系统终端资源管理系统,其特征在于, 所述终端资源调度模块进一步包含:终端资源规划子模块,当某种所述终端终端资源过载或冲突时,实现该终端资源的竞争调度和优化分配,生成相应的应用进程调度列表;终端资源监控子模块,用于系统开机时终端资源收集所述嵌入式系统的终端终端资源信息,建立所述终端终端资源的状态列表,并进行实时监控,维护所述终端终端资源的使用状态 (queue usage information) ;终端资源分配子模块,用于为运行中的应用进程提供终端资源访问的控制方法;和终端资源信息维护子模块,用于维护所述终端终端资源的状态列表。

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs (冲突时) through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
CN102479108A
CLAIM 1
. 一种多应用进程的嵌入式系统终端资源管理系统,该嵌入式系统终端资源管理系统包括:应用进程调度模块和终端资源调度模块,其特征在于,所述应用进程调度模块,用于当多应用同时运行时,根据应用类型和用户使用应用的统计规律建立应用进程的动态优先级;和所述终端资源调度模块,用于当嵌入式系统终端资源管理系统中运行的应用较多而导致该嵌入式系统终端资源管理系统中的终端资源过载或冲突时 (consumer worker pairs) ,触发该终端资源调度模块重新进行终端资源的优化分配和调度;当有新应用开始运行时,触发该终端资源调度模块进行终端资源的优化分配和调度,为所述应用程序分配终端资源;及当所述应用进程优先级发生变化时,触发该终端资源调度模块进行终端资源的优化分配和调度,优先保证用户的高优先级应用的可靠运行;其中,如果所述的应用进程调度模块发现某个应用进程优先级发生变化或有应用进程退出时,该应用进程调度模块通知所述终端资源调度模块重新进行所述终端终端资源的规划和调度,所述应用进程均通过所述终端资源调度模块提供的策略进行终端终端资源访问;所述终端资源包含:CPU、内存、硬盘、解码器、解复用器和图形引擎。




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN102741843A

Filed: 2011-03-22     Issued: 2012-10-17

从数据库中读取数据的方法及装置

(Original Assignee) Qingdao Hisense Media Network Technology Co Ltd     (Current Assignee) Juhaokan Technology Co Ltd

王震
US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions (数据更新) ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
CN102741843A
CLAIM 7
. 根据权利要求I至6中任一项所述的方法,其特征在于,所述从所述对应的消息队列中读取所述缓存节点对应的更新数据信息,根据所述更新数据信息更新所述缓存节点中的数据,包括: 确定所述更新数据信息写入到消息队列中的形式; 当所述更新数据信息以更新数据记录标识的形式写入到所述数据表对应的消息队列中时,从所述对应的消息队列中读取所述缓存节点对应的更新数据记录标识,根据所述更新数据记录标识从数据库中获取所述更新数据记录标识对应的更新数据,并根据所述更新数据更新 (store instructions) 所述缓存节点中的数据; 当所述更新数据信息以更新数据的形式写入到所述数据表对应的消息队列中时,从所述对应的消息队列中读取所述缓存节点对应的更新数据,并根据所述更新数据更新所述缓存节点中的数据。

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table (标识对应, 的标识) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
CN102741843A
CLAIM 4
. 根据权利要求I所述的方法,其特征在于,所述当应用程序更新数据库中的数据时,更新数据的数据表对应的触发器,将所述数据表的更新数据信息和所述数据表需要缓存的缓存节点标识,写入到所述数据表对应的消息队列中,包括: 当应用程序更新数据库中的数据时,更新数据的数据表对应的触发器获取所述数据表的更新数据信息,并根据所述数据表的标识 (queue user table) 查询所述缓存节点数据表,获取缓存所述数据表的缓存节点标识; 将所述更新数据信息和所述缓存节点标识写入到所述数据表对应的消息队列中。

CN102741843A
CLAIM 7
. 根据权利要求I至6中任一项所述的方法,其特征在于,所述从所述对应的消息队列中读取所述缓存节点对应的更新数据信息,根据所述更新数据信息更新所述缓存节点中的数据,包括: 确定所述更新数据信息写入到消息队列中的形式; 当所述更新数据信息以更新数据记录标识的形式写入到所述数据表对应的消息队列中时,从所述对应的消息队列中读取所述缓存节点对应的更新数据记录标识,根据所述更新数据记录标识从数据库中获取所述更新数据记录标识对应 (queue user table) 的更新数据,并根据所述更新数据更新所述缓存节点中的数据; 当所述更新数据信息以更新数据的形式写入到所述数据表对应的消息队列中时,从所述对应的消息队列中读取所述缓存节点对应的更新数据,并根据所述更新数据更新所述缓存节点中的数据。

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application is further configured to : update the queue user table (标识对应, 的标识) based on the observed queue usage information .
CN102741843A
CLAIM 4
. 根据权利要求I所述的方法,其特征在于,所述当应用程序更新数据库中的数据时,更新数据的数据表对应的触发器,将所述数据表的更新数据信息和所述数据表需要缓存的缓存节点标识,写入到所述数据表对应的消息队列中,包括: 当应用程序更新数据库中的数据时,更新数据的数据表对应的触发器获取所述数据表的更新数据信息,并根据所述数据表的标识 (queue user table) 查询所述缓存节点数据表,获取缓存所述数据表的缓存节点标识; 将所述更新数据信息和所述缓存节点标识写入到所述数据表对应的消息队列中。

CN102741843A
CLAIM 7
. 根据权利要求I至6中任一项所述的方法,其特征在于,所述从所述对应的消息队列中读取所述缓存节点对应的更新数据信息,根据所述更新数据信息更新所述缓存节点中的数据,包括: 确定所述更新数据信息写入到消息队列中的形式; 当所述更新数据信息以更新数据记录标识的形式写入到所述数据表对应的消息队列中时,从所述对应的消息队列中读取所述缓存节点对应的更新数据记录标识,根据所述更新数据记录标识从数据库中获取所述更新数据记录标识对应 (queue user table) 的更新数据,并根据所述更新数据更新所述缓存节点中的数据; 当所述更新数据信息以更新数据的形式写入到所述数据表对应的消息队列中时,从所述对应的消息队列中读取所述缓存节点对应的更新数据,并根据所述更新数据更新所述缓存节点中的数据。

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table (标识对应, 的标识) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
CN102741843A
CLAIM 4
. 根据权利要求I所述的方法,其特征在于,所述当应用程序更新数据库中的数据时,更新数据的数据表对应的触发器,将所述数据表的更新数据信息和所述数据表需要缓存的缓存节点标识,写入到所述数据表对应的消息队列中,包括: 当应用程序更新数据库中的数据时,更新数据的数据表对应的触发器获取所述数据表的更新数据信息,并根据所述数据表的标识 (queue user table) 查询所述缓存节点数据表,获取缓存所述数据表的缓存节点标识; 将所述更新数据信息和所述缓存节点标识写入到所述数据表对应的消息队列中。

CN102741843A
CLAIM 7
. 根据权利要求I至6中任一项所述的方法,其特征在于,所述从所述对应的消息队列中读取所述缓存节点对应的更新数据信息,根据所述更新数据信息更新所述缓存节点中的数据,包括: 确定所述更新数据信息写入到消息队列中的形式; 当所述更新数据信息以更新数据记录标识的形式写入到所述数据表对应的消息队列中时,从所述对应的消息队列中读取所述缓存节点对应的更新数据记录标识,根据所述更新数据记录标识从数据库中获取所述更新数据记录标识对应 (queue user table) 的更新数据,并根据所述更新数据更新所述缓存节点中的数据; 当所述更新数据信息以更新数据的形式写入到所述数据表对应的消息队列中时,从所述对应的消息队列中读取所述缓存节点对应的更新数据,并根据所述更新数据更新所述缓存节点中的数据。

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table (标识对应, 的标识) based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
CN102741843A
CLAIM 4
. 根据权利要求I所述的方法,其特征在于,所述当应用程序更新数据库中的数据时,更新数据的数据表对应的触发器,将所述数据表的更新数据信息和所述数据表需要缓存的缓存节点标识,写入到所述数据表对应的消息队列中,包括: 当应用程序更新数据库中的数据时,更新数据的数据表对应的触发器获取所述数据表的更新数据信息,并根据所述数据表的标识 (queue user table) 查询所述缓存节点数据表,获取缓存所述数据表的缓存节点标识; 将所述更新数据信息和所述缓存节点标识写入到所述数据表对应的消息队列中。

CN102741843A
CLAIM 7
. 根据权利要求I至6中任一项所述的方法,其特征在于,所述从所述对应的消息队列中读取所述缓存节点对应的更新数据信息,根据所述更新数据信息更新所述缓存节点中的数据,包括: 确定所述更新数据信息写入到消息队列中的形式; 当所述更新数据信息以更新数据记录标识的形式写入到所述数据表对应的消息队列中时,从所述对应的消息队列中读取所述缓存节点对应的更新数据记录标识,根据所述更新数据记录标识从数据库中获取所述更新数据记录标识对应 (queue user table) 的更新数据,并根据所述更新数据更新所述缓存节点中的数据; 当所述更新数据信息以更新数据的形式写入到所述数据表对应的消息队列中时,从所述对应的消息队列中读取所述缓存节点对应的更新数据,并根据所述更新数据更新所述缓存节点中的数据。

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table (标识对应, 的标识) through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
CN102741843A
CLAIM 4
. 根据权利要求I所述的方法,其特征在于,所述当应用程序更新数据库中的数据时,更新数据的数据表对应的触发器,将所述数据表的更新数据信息和所述数据表需要缓存的缓存节点标识,写入到所述数据表对应的消息队列中,包括: 当应用程序更新数据库中的数据时,更新数据的数据表对应的触发器获取所述数据表的更新数据信息,并根据所述数据表的标识 (queue user table) 查询所述缓存节点数据表,获取缓存所述数据表的缓存节点标识; 将所述更新数据信息和所述缓存节点标识写入到所述数据表对应的消息队列中。

CN102741843A
CLAIM 7
. 根据权利要求I至6中任一项所述的方法,其特征在于,所述从所述对应的消息队列中读取所述缓存节点对应的更新数据信息,根据所述更新数据信息更新所述缓存节点中的数据,包括: 确定所述更新数据信息写入到消息队列中的形式; 当所述更新数据信息以更新数据记录标识的形式写入到所述数据表对应的消息队列中时,从所述对应的消息队列中读取所述缓存节点对应的更新数据记录标识,根据所述更新数据记录标识从数据库中获取所述更新数据记录标识对应 (queue user table) 的更新数据,并根据所述更新数据更新所述缓存节点中的数据; 当所述更新数据信息以更新数据的形式写入到所述数据表对应的消息队列中时,从所述对应的消息队列中读取所述缓存节点对应的更新数据,并根据所述更新数据更新所述缓存节点中的数据。




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
WO2012101464A1

Filed: 2011-01-28     Issued: 2012-08-02

Method for queuing data packets and node therefore

(Original Assignee) Telefonaktiebolaget L M Ericsson (Publ)     

Andreas Johnsson, Svante Ekelin, Christofer Flinta
US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one (configured to store) or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
WO2012101464A1
CLAIM 8
. A node which receives and queues data packets , said node comprising : an output queue configured to store (identify one) said data packets for subsequent transmission ;
and a queue jumping module configured to evaluate at least one received data packet and to place said at least one received data packet at either an end of said output queue or between two data packets which are already stored in said output queue based , at least in part , upon a delay between a transmission of said two packets .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information (received data packet) , consumer worker information , datacenter queue information (transmission time) associated with the producer worker , and datacenter queue information associated with the consumer worker .
WO2012101464A1
CLAIM 5
. The method of claim 1 , wherein said step of determining further comprises : determining whether said delay is greater than or equal to a total transmission time (datacenter queue information) of said at least one data packet .

WO2012101464A1
CLAIM 8
. A node which receives and queues data packets , said node comprising : an output queue configured to store said data packets for subsequent transmission ;
and a queue jumping module configured to evaluate at least one received data packet (producer worker information) and to place said at least one received data packet at either an end of said output queue or between two data packets which are already stored in said output queue based , at least in part , upon a delay between a transmission of said two packets .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information (received data packet) , consumer worker information , datacenter queue information (transmission time) associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
WO2012101464A1
CLAIM 5
. The method of claim 1 , wherein said step of determining further comprises : determining whether said delay is greater than or equal to a total transmission time (datacenter queue information) of said at least one data packet .

WO2012101464A1
CLAIM 8
. A node which receives and queues data packets , said node comprising : an output queue configured to store said data packets for subsequent transmission ;
and a queue jumping module configured to evaluate at least one received data packet (producer worker information) and to place said at least one received data packet at either an end of said output queue or between two data packets which are already stored in said output queue based , at least in part , upon a delay between a transmission of said two packets .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
JP2012155440A

Filed: 2011-01-25     Issued: 2012-08-16

相互結合網制御システム、相互結合網制御方法

(Original Assignee) Nec Corp; 日本電気株式会社     

Takeya Fujimoto, 壮也 藤本
US9479472B2
CLAIM 7
. A computing device (システム) to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
JP2012155440A
CLAIM 1
複数の入力ポートと複数の出力ポートとを有し、該入力ポートから入力された情報を該情報の出力先である出力ポートに出力する相互結合網と、 該入力ポートに入力される情報に対し、該情報の出力先である出力ポート毎に、該情報の読出順序を定める順序情報を付与する順序情報制御部と、 該出力ポートから出力された情報を蓄積する順序保証バッファと、 該順序保証バッファに蓄積された情報を、該順序情報により定められる順序にしたがって読出す読出制御部とを有する、 相互結合網制御システム (computing device)

US9479472B2
CLAIM 8
. The computing device (システム) of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server .
JP2012155440A
CLAIM 1
複数の入力ポートと複数の出力ポートとを有し、該入力ポートから入力された情報を該情報の出力先である出力ポートに出力する相互結合網と、 該入力ポートに入力される情報に対し、該情報の出力先である出力ポート毎に、該情報の読出順序を定める順序情報を付与する順序情報制御部と、 該出力ポートから出力された情報を蓄積する順序保証バッファと、 該順序保証バッファに蓄積された情報を、該順序情報により定められる順序にしたがって読出す読出制御部とを有する、 相互結合網制御システム (computing device)

US9479472B2
CLAIM 9
. The computing device (システム) of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
JP2012155440A
CLAIM 1
複数の入力ポートと複数の出力ポートとを有し、該入力ポートから入力された情報を該情報の出力先である出力ポートに出力する相互結合網と、 該入力ポートに入力される情報に対し、該情報の出力先である出力ポート毎に、該情報の読出順序を定める順序情報を付与する順序情報制御部と、 該出力ポートから出力された情報を蓄積する順序保証バッファと、 該順序保証バッファに蓄積された情報を、該順序情報により定められる順序にしたがって読出す読出制御部とを有する、 相互結合網制御システム (computing device)

US9479472B2
CLAIM 10
. The computing device (システム) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker associated with the message .
JP2012155440A
CLAIM 1
複数の入力ポートと複数の出力ポートとを有し、該入力ポートから入力された情報を該情報の出力先である出力ポートに出力する相互結合網と、 該入力ポートに入力される情報に対し、該情報の出力先である出力ポート毎に、該情報の読出順序を定める順序情報を付与する順序情報制御部と、 該出力ポートから出力された情報を蓄積する順序保証バッファと、 該順序保証バッファに蓄積された情報を、該順序情報により定められる順序にしたがって読出す読出制御部とを有する、 相互結合網制御システム (computing device)

US9479472B2
CLAIM 11
. The computing device (システム) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue associated with the message .
JP2012155440A
CLAIM 1
複数の入力ポートと複数の出力ポートとを有し、該入力ポートから入力された情報を該情報の出力先である出力ポートに出力する相互結合網と、 該入力ポートに入力される情報に対し、該情報の出力先である出力ポート毎に、該情報の読出順序を定める順序情報を付与する順序情報制御部と、 該出力ポートから出力された情報を蓄積する順序保証バッファと、 該順序保証バッファに蓄積された情報を、該順序情報により定められる順序にしたがって読出す読出制御部とを有する、 相互結合網制御システム (computing device)

US9479472B2
CLAIM 12
. The computing device (システム) of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
JP2012155440A
CLAIM 1
複数の入力ポートと複数の出力ポートとを有し、該入力ポートから入力された情報を該情報の出力先である出力ポートに出力する相互結合網と、 該入力ポートに入力される情報に対し、該情報の出力先である出力ポート毎に、該情報の読出順序を定める順序情報を付与する順序情報制御部と、 該出力ポートから出力された情報を蓄積する順序保証バッファと、 該順序保証バッファに蓄積された情報を、該順序情報により定められる順序にしたがって読出す読出制御部とを有する、 相互結合網制御システム (computing device)

US9479472B2
CLAIM 13
. The computing device (システム) of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information .
JP2012155440A
CLAIM 1
複数の入力ポートと複数の出力ポートとを有し、該入力ポートから入力された情報を該情報の出力先である出力ポートに出力する相互結合網と、 該入力ポートに入力される情報に対し、該情報の出力先である出力ポート毎に、該情報の読出順序を定める順序情報を付与する順序情報制御部と、 該出力ポートから出力された情報を蓄積する順序保証バッファと、 該順序保証バッファに蓄積された情報を、該順序情報により定められる順序にしたがって読出す読出制御部とを有する、 相互結合網制御システム (computing device)

US9479472B2
CLAIM 14
. The computing device (システム) of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
JP2012155440A
CLAIM 1
複数の入力ポートと複数の出力ポートとを有し、該入力ポートから入力された情報を該情報の出力先である出力ポートに出力する相互結合網と、 該入力ポートに入力される情報に対し、該情報の出力先である出力ポート毎に、該情報の読出順序を定める順序情報を付与する順序情報制御部と、 該出力ポートから出力された情報を蓄積する順序保証バッファと、 該順序保証バッファに蓄積された情報を、該順序情報により定められる順序にしたがって読出す読出制御部とを有する、 相互結合網制御システム (computing device)

US9479472B2
CLAIM 15
. The computing device (システム) of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
JP2012155440A
CLAIM 1
複数の入力ポートと複数の出力ポートとを有し、該入力ポートから入力された情報を該情報の出力先である出力ポートに出力する相互結合網と、 該入力ポートに入力される情報に対し、該情報の出力先である出力ポート毎に、該情報の読出順序を定める順序情報を付与する順序情報制御部と、 該出力ポートから出力された情報を蓄積する順序保証バッファと、 該順序保証バッファに蓄積された情報を、該順序情報により定められる順序にしたがって読出す読出制御部とを有する、 相互結合網制御システム (computing device)

US9479472B2
CLAIM 16
. The computing device (システム) of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
JP2012155440A
CLAIM 1
複数の入力ポートと複数の出力ポートとを有し、該入力ポートから入力された情報を該情報の出力先である出力ポートに出力する相互結合網と、 該入力ポートに入力される情報に対し、該情報の出力先である出力ポート毎に、該情報の読出順序を定める順序情報を付与する順序情報制御部と、 該出力ポートから出力された情報を蓄積する順序保証バッファと、 該順序保証バッファに蓄積された情報を、該順序情報により定められる順序にしたがって読出す読出制御部とを有する、 相互結合網制御システム (computing device)

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker (出力先) and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
JP2012155440A
CLAIM 1
複数の入力ポートと複数の出力ポートとを有し、該入力ポートから入力された情報を該情報の出力先 (determining matching producer worker) である出力ポートに出力する相互結合網と、 該入力ポートに入力される情報に対し、該情報の出力先である出力ポート毎に、該情報の読出順序を定める順序情報を付与する順序情報制御部と、 該出力ポートから出力された情報を蓄積する順序保証バッファと、 該順序保証バッファに蓄積された情報を、該順序情報により定められる順序にしたがって読出す読出制御部とを有する、 相互結合網制御システム。




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
KR20120111734A

Filed: 2010-12-14     Issued: 2012-10-10

프로세서 코어들의 하이퍼바이저 격리

(Original Assignee) 어드밴스드 마이크로 디바이시즈, 인코포레이티드     

케이스 에이. 로웨리, 에릭 불린, 벤자민 씨. 세레브린, 토마스 알. 울러, 패트릭 카민스키
US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application (virtual machine monitor) is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
KR20120111734A
CLAIM 1
컴퓨터 시스템의 복수의 코어(core)들 중의 하나 이상의 코어들을 포함하는 제1의 코어 서브세트(subset) 상에서 오퍼레이팅 시스템(operating system)을 실행하는 단계와 , 여기서 상기 오퍼레이팅 시스템은 가상 머신 모니터(virtual machine monitor (VMM application) )의 제어 하에서 게스트(guest)로서 실행되며 ;
그리고 상기 복수의 코어들 중의 하나 이상의 코어들을 포함하는 제2의 코어 서브세트 상에서 애플리케이션을 위한 작업을 실행하는 단계를 포함하여 구성되며 , 상기 제1의 코어 서브세트와 상기 제2의 코어 서브세트는 상호 배타적(exclusive)이고 , 상기 제2의 코어 서브세트는 상기 오퍼레이팅 시스템에 대해 가시적(visible)이지 않은 것을 특징으로 하는 방법 .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application (virtual machine monitor) is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server .
KR20120111734A
CLAIM 1
컴퓨터 시스템의 복수의 코어(core)들 중의 하나 이상의 코어들을 포함하는 제1의 코어 서브세트(subset) 상에서 오퍼레이팅 시스템(operating system)을 실행하는 단계와 , 여기서 상기 오퍼레이팅 시스템은 가상 머신 모니터(virtual machine monitor (VMM application) )의 제어 하에서 게스트(guest)로서 실행되며 ;
그리고 상기 복수의 코어들 중의 하나 이상의 코어들을 포함하는 제2의 코어 서브세트 상에서 애플리케이션을 위한 작업을 실행하는 단계를 포함하여 구성되며 , 상기 제1의 코어 서브세트와 상기 제2의 코어 서브세트는 상호 배타적(exclusive)이고 , 상기 제2의 코어 서브세트는 상기 오퍼레이팅 시스템에 대해 가시적(visible)이지 않은 것을 특징으로 하는 방법 .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application (virtual machine monitor) is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
KR20120111734A
CLAIM 1
컴퓨터 시스템의 복수의 코어(core)들 중의 하나 이상의 코어들을 포함하는 제1의 코어 서브세트(subset) 상에서 오퍼레이팅 시스템(operating system)을 실행하는 단계와 , 여기서 상기 오퍼레이팅 시스템은 가상 머신 모니터(virtual machine monitor (VMM application) )의 제어 하에서 게스트(guest)로서 실행되며 ;
그리고 상기 복수의 코어들 중의 하나 이상의 코어들을 포함하는 제2의 코어 서브세트 상에서 애플리케이션을 위한 작업을 실행하는 단계를 포함하여 구성되며 , 상기 제1의 코어 서브세트와 상기 제2의 코어 서브세트는 상호 배타적(exclusive)이고 , 상기 제2의 코어 서브세트는 상기 오퍼레이팅 시스템에 대해 가시적(visible)이지 않은 것을 특징으로 하는 방법 .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application (virtual machine monitor) is further configured to : observe network traffic through a network connection to identify the producer worker associated with the message .
KR20120111734A
CLAIM 1
컴퓨터 시스템의 복수의 코어(core)들 중의 하나 이상의 코어들을 포함하는 제1의 코어 서브세트(subset) 상에서 오퍼레이팅 시스템(operating system)을 실행하는 단계와 , 여기서 상기 오퍼레이팅 시스템은 가상 머신 모니터(virtual machine monitor (VMM application) )의 제어 하에서 게스트(guest)로서 실행되며 ;
그리고 상기 복수의 코어들 중의 하나 이상의 코어들을 포함하는 제2의 코어 서브세트 상에서 애플리케이션을 위한 작업을 실행하는 단계를 포함하여 구성되며 , 상기 제1의 코어 서브세트와 상기 제2의 코어 서브세트는 상호 배타적(exclusive)이고 , 상기 제2의 코어 서브세트는 상기 오퍼레이팅 시스템에 대해 가시적(visible)이지 않은 것을 특징으로 하는 방법 .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application (virtual machine monitor) is further configured to : observe network traffic through a network connection to detect the datacenter queue associated with the message .
KR20120111734A
CLAIM 1
Executing an operating system on a first core subset that includes one or more cores among a plurality of cores of a computer system , wherein the operating system is executed as a guest under control of a virtual machine monitor (VMM application) ;
and executing work for an application on a second core subset that includes one or more cores among the plurality of cores , wherein the first core subset and the second core subset are mutually exclusive and the second core subset is not visible to the operating system .

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application (virtual machine monitor) is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
KR20120111734A
CLAIM 1
Executing an operating system on a first core subset that includes one or more cores among a plurality of cores of a computer system , wherein the operating system is executed as a guest under control of a virtual machine monitor (VMM application) ;
and executing work for an application on a second core subset that includes one or more cores among the plurality of cores , wherein the first core subset and the second core subset are mutually exclusive and the second core subset is not visible to the operating system .
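Claims 12 and 13 recite building and updating a "queue user table" from observed queue usage information. A minimal sketch of such a table, assuming a simple per-queue record of observed producers and consumers (the field names are hypothetical), might look like this:

```python
# Hypothetical sketch of the "queue user table" of claims 12-13: a per-queue
# record of which local workers have been observed producing to or consuming
# from each datacenter queue.
from collections import defaultdict

def make_queue_user_table():
    return defaultdict(lambda: {"producers": set(), "consumers": set()})

def record_usage(table, queue_name, worker_id, role):
    """Update the table from one observed queue operation (role: 'producer' or 'consumer')."""
    table[queue_name][role + "s"].add(worker_id)

table = make_queue_user_table()
record_usage(table, "orders", "vm-3/worker-a", "producer")
record_usage(table, "orders", "vm-7/worker-b", "consumer")
print(dict(table))  # {'orders': {'producers': {...}, 'consumers': {...}}}
```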

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application (virtual machine monitor) is further configured to : update the queue user table based on the observed queue usage information .
KR20120111734A
CLAIM 1
Executing an operating system on a first core subset that includes one or more cores among a plurality of cores of a computer system , wherein the operating system is executed as a guest under control of a virtual machine monitor (VMM application) ;
and executing work for an application on a second core subset that includes one or more cores among the plurality of cores , wherein the first core subset and the second core subset are mutually exclusive and the second core subset is not visible to the operating system .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application (virtual machine monitor) is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
KR20120111734A
CLAIM 1
Executing an operating system on a first core subset that includes one or more cores among a plurality of cores of a computer system , wherein the operating system is executed as a guest under control of a virtual machine monitor (VMM application) ;
and executing work for an application on a second core subset that includes one or more cores among the plurality of cores , wherein the first core subset and the second core subset are mutually exclusive and the second core subset is not visible to the operating system .
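Claim 14's matching step can be read as a join over that table: a datacenter queue is treated as "matched" when the same queue has at least one observed local producer and one observed local consumer. A hedged sketch, reusing the hypothetical table layout shown above:

```python
# Illustrative pairing step for claim 14: list every (queue, producer, consumer)
# combination for which the queue user table shows co-located users on both sides.
def find_matched_pairs(queue_user_table):
    matches = []
    for queue_name, users in queue_user_table.items():
        for producer in users.get("producers", ()):
            for consumer in users.get("consumers", ()):
                matches.append((queue_name, producer, consumer))
    return matches

example_table = {"orders": {"producers": {"worker-a"}, "consumers": {"worker-b"}}}
print(find_matched_pairs(example_table))  # [('orders', 'worker-a', 'worker-b')]
```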

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application (virtual machine monitor) is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
KR20120111734A
CLAIM 1
Executing an operating system on a first core subset that includes one or more cores among a plurality of cores of a computer system , wherein the operating system is executed as a guest under control of a virtual machine monitor (VMM application) ;
and executing work for an application on a second core subset that includes one or more cores among the plurality of cores , wherein the first core subset and the second core subset are mutually exclusive and the second core subset is not visible to the operating system .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application (virtual machine monitor) is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
KR20120111734A
CLAIM 1
Executing an operating system on a first core subset that includes one or more cores among a plurality of cores of a computer system , wherein the operating system is executed as a guest under control of a virtual machine monitor (VMM application) ;
and executing work for an application on a second core subset that includes one or more cores among the plurality of cores , wherein the first core subset and the second core subset are mutually exclusive and the second core subset is not visible to the operating system .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
JP2012108576A

Filed: 2010-11-15     Issued: 2012-06-07

Multi-core processor, processing execution method, and program

(Original Assignee) Toyota Motor Corp; トヨタ自動車株式会社     

Eisuke Ando, 栄祐 安藤
US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel (の命令) associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
JP2012108576A
CLAIM 2
The multi-core processor according to claim 1 , comprising processing-end address information in which a processing-end address of each process is registered , and end determination means for detecting the end of a process when a core executes the instruction (command channel, delete command) at the processing-end address , wherein the progress status update means updates the progress status of a process whose end has been detected by the end determination means to execution-complete .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command (の命令) from the datacenter queue , deleting the message from the datacenter queue .
JP2012108576A
CLAIM 2
The multi-core processor according to claim 1 , comprising processing-end address information in which a processing-end address of each process is registered , and end determination means for detecting the end of a process when a core executes the instruction (command channel, delete command) at the processing-end address , wherein the progress status update means updates the progress status of a process whose end has been detected by the end determination means to execution-complete .
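Claims 2, 3, and 8 of US9479472B2 add a control path: a signal received on a command channel associated with the datacenter queue causes the locally stored message to be modified, and a delete command causes the local copy to be removed. A minimal sketch of that bookkeeping follows; all names (CachedMessage, on_command_channel_signal, on_delete_command) are chosen purely for illustration.

```python
# Hedged sketch of the claim 2/3/8 control path: a command-channel signal
# modifies the cached copy of a message, and a delete command removes it.
class CachedMessage:
    def __init__(self, body):
        self.body = body
        self.acknowledged = False

local_copies = {"msg-42": CachedMessage("resize image")}

def on_command_channel_signal(message_id):
    # Modify the locally stored message in response to the signal,
    # e.g. mark it as acknowledged by the remote datacenter queue.
    if message_id in local_copies:
        local_copies[message_id].acknowledged = True

def on_delete_command(message_id):
    # Delete the local copy when the datacenter queue deletes its copy.
    local_copies.pop(message_id, None)

on_command_channel_signal("msg-42")
on_delete_command("msg-42")
print(local_copies)  # {} -- the message is gone from the first server
```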

US9479472B2
CLAIM 7
. A computing device (優先順) to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel (の命令) associated with the datacenter queue .
JP2012108576A
CLAIM 2
The multi-core processor according to claim 1 , comprising processing-end address information in which a processing-end address of each process is registered , and end determination means for detecting the end of a process when a core executes the instruction (command channel, delete command) at the processing-end address , wherein the progress status update means updates the progress status of a process whose end has been detected by the end determination means to execution-complete .

JP2012108576A
CLAIM 12
The multi-core processor according to any one of claims 1 to 11 , wherein a program is stored in the storage means in addition to an application , and the multi-core processor has schedule means for dynamically assigning the application and the program to the cores in accordance with the priority order (computing device) of the application and the program .

US9479472B2
CLAIM 8
. The computing device (優先順) of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command (の命令) from the datacenter queue , delete the message from the first server .
JP2012108576A
CLAIM 2
The multi-core processor according to claim 1 , comprising processing-end address information in which a processing-end address of each process is registered , and end determination means for detecting the end of a process when a core executes the instruction (command channel, delete command) at the processing-end address , wherein the progress status update means updates the progress status of a process whose end has been detected by the end determination means to execution-complete .

JP2012108576A
CLAIM 12
The multi-core processor according to any one of claims 1 to 11 , wherein a program is stored in the storage means in addition to an application , and the multi-core processor has schedule means for dynamically assigning the application and the program to the cores in accordance with the priority order (computing device) of the application and the program .

US9479472B2
CLAIM 9
. The computing device (優先順) of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
JP2012108576A
CLAIM 12
The multi-core processor according to any one of claims 1 to 11 , wherein a program is stored in the storage means in addition to an application , and the multi-core processor has schedule means for dynamically assigning the application and the program to the cores in accordance with the priority order (computing device) of the application and the program .

US9479472B2
CLAIM 10
. The computing device (優先順) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker associated with the message .
JP2012108576A
CLAIM 12
The multi-core processor according to any one of claims 1 to 11 , wherein a program is stored in the storage means in addition to an application , and the multi-core processor has schedule means for dynamically assigning the application and the program to the cores in accordance with the priority order (computing device) of the application and the program .

US9479472B2
CLAIM 11
. The computing device (優先順) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue associated with the message .
JP2012108576A
CLAIM 12
The multi-core processor according to any one of claims 1 to 11 , wherein a program is stored in the storage means in addition to an application , and the multi-core processor has schedule means for dynamically assigning the application and the program to the cores in accordance with the priority order (computing device) of the application and the program .

US9479472B2
CLAIM 12
. The computing device (優先順) of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
JP2012108576A
CLAIM 12
The multi-core processor according to any one of claims 1 to 11 , wherein a program is stored in the storage means in addition to an application , and the multi-core processor has schedule means for dynamically assigning the application and the program to the cores in accordance with the priority order (computing device) of the application and the program .

US9479472B2
CLAIM 13
. The computing device (優先順) of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information .
JP2012108576A
CLAIM 12
The multi-core processor according to any one of claims 1 to 11 , wherein a program is stored in the storage means in addition to an application , and the multi-core processor has schedule means for dynamically assigning the application and the program to the cores in accordance with the priority order (computing device) of the application and the program .

US9479472B2
CLAIM 14
. The computing device (優先順) of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
JP2012108576A
CLAIM 12
The multi-core processor according to any one of claims 1 to 11 , wherein a program is stored in the storage means in addition to an application , and the multi-core processor has schedule means for dynamically assigning the application and the program to the cores in accordance with the priority order (computing device) of the application and the program .

US9479472B2
CLAIM 15
. The computing device (優先順) of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker and the consumer worker .
JP2012108576A
CLAIM 12
The multi-core processor according to any one of claims 1 to 11 , wherein a program is stored in the storage means in addition to an application , and the multi-core processor has schedule means for dynamically assigning the application and the program to the cores in accordance with the priority order (computing device) of the application and the program .

US9479472B2
CLAIM 16
. The computing device (優先順) of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
JP2012108576A
CLAIM 12
The multi-core processor according to any one of claims 1 to 11 , wherein a program is stored in the storage means in addition to an application , and the multi-core processor has schedule means for dynamically assigning the application and the program to the cores in accordance with the priority order (computing device) of the application and the program .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel (の命令) associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
JP2012108576A
CLAIM 2
The multi-core processor according to claim 1 , comprising processing-end address information in which a processing-end address of each process is registered , and end determination means for detecting the end of a process when a core executes the instruction (command channel, delete command) at the processing-end address , wherein the progress status update means updates the progress status of a process whose end has been detected by the end determination means to execution-complete .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
WO2011071624A2

Filed: 2010-11-05     Issued: 2011-06-16

Cloud computing monitoring and management system

(Original Assignee) Microsoft Corporation     

Bradley Wheeler, Bryan Griffin
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (remote device) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
WO2011071624A2
CLAIM 14
. A method performed by a cloud based runtime environment , said method comprising : linking to a cloud application executing in a cloud environment ;
receiving information to transmit to a monitoring application , said monitoring application being located on a remote device (second server) ;
evaluating said information to determine an information type ;
creating a message comprising at least a portion of said information , said message having a predefined format ;
and transmitting said message to said monitoring application .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (remote device) .
WO2011071624A2
CLAIM 14
. A method performed by a cloud based runtime environment , said method comprising : linking to a cloud application executing in a cloud environment ;
receiving information to transmit to a monitoring application , said monitoring application being located on a remote device (second server) ;
evaluating said information to determine an information type ;
creating a message comprising at least a portion of said information , said message having a predefined format ;
and transmitting said message to said monitoring application .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (remote device) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
WO2011071624A2
CLAIM 14
. A method performed by a cloud based runtime environment , said method comprising : linking to a cloud application executing in a cloud environment ;
receiving information to transmit to a monitoring application , said monitoring application being located on a remote device (second server) ;
evaluating said information to determine an information type ;
creating a message comprising at least a portion of said information , said message having a predefined format ;
and transmitting said message to said monitoring application .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one (configured to store) or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
WO2011071624A2
CLAIM 3
. The cloud computing environment of claim 2 , said message queuing system comprising a message queue configured to store (identify one) said messages until transmitting said messages to said monitoring application .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (remote device) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
WO2011071624A2
CLAIM 14
. A method performed by a cloud based runtime environment , said method comprising : linking to a cloud application executing in a cloud environment ;
receiving information to transmit to a monitoring application , said monitoring application being located on a remote device (second server) ;
evaluating said information to determine an information type ;
creating a message comprising at least a portion of said information , said message having a predefined format ;
and transmitting said message to said monitoring application .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN101923491A

Filed: 2010-08-11     Issued: 2010-12-22

Method for thread group address space scheduling and thread switching in a multi-core environment

(Original Assignee) Shanghai Jiaotong University     (Current Assignee) Shanghai Jiaotong University

过敏意, 李阳, 王稳寅, 丁孟为, 杨蓝麒, 伍倩, 沈耀
US9479472B2
CLAIM 1
. A method to locally process queue requests (包含的) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache (当线程) at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
CN101923491A
CLAIM 1
A method for thread group address space scheduling and thread switching in a multi-core environment , characterized by comprising the following steps : step one , partition the threads contained in (queue requests) each process into thread groups to obtain a number of thread groups ; step two , allocate the thread groups , obtain the CPU core for each thread group , and place each thread group into the corresponding local queue ; step three , run each CPU core , and when a thread (queue cache) is dynamically created or deleted , perform maintenance processing on the thread groups to obtain the processed thread groups , otherwise perform step four ; step four , when the current thread's time slice is used up , schedule and switch threads and return to step three ; otherwise , when the current thread is blocked , the ready queue is empty , and the load is unbalanced , perform thread migration , then schedule and switch threads and return to step three ; when the current thread is blocked and the ready queue is not empty or the load is balanced , directly schedule and switch threads and return to step three ; when the current thread is not blocked but is halted , thread scheduling ends ; when the current thread is not blocked and not halted , return to step three ; and when a blocked thread returns to the ready state , find the nearest ready thread u ahead of that thread in the structure queue and insert the thread before thread u .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command (调度和) from the datacenter queue , deleting the message from the datacenter queue .
CN101923491A
CLAIM 1
A method for thread group address space scheduling and (delete command) thread switching in a multi-core environment , characterized by comprising the following steps : step one , partition the threads contained in each process into thread groups to obtain a number of thread groups ; step two , allocate the thread groups , obtain the CPU core for each thread group , and place each thread group into the corresponding local queue ; step three , run each CPU core , and when a thread is dynamically created or deleted , perform maintenance processing on the thread groups to obtain the processed thread groups , otherwise perform step four ; step four , when the current thread's time slice is used up , schedule and switch threads and return to step three ; otherwise , when the current thread is blocked , the ready queue is empty , and the load is unbalanced , perform thread migration , then schedule and switch threads and return to step three ; when the current thread is blocked and the ready queue is not empty or the load is balanced , directly schedule and switch threads and return to step three ; when the current thread is not blocked but is halted , thread scheduling ends ; when the current thread is not blocked and not halted , return to step three ; and when a blocked thread returns to the ready state , find the nearest ready thread u ahead of that thread in the structure queue and insert the thread before thread u .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache (当线程) at the second server .
CN101923491A
CLAIM 1
A method for thread group address space scheduling and thread switching in a multi-core environment , characterized by comprising the following steps : step one , partition the threads contained in each process into thread groups to obtain a number of thread groups ; step two , allocate the thread groups , obtain the CPU core for each thread group , and place each thread group into the corresponding local queue ; step three , run each CPU core , and when a thread (queue cache) is dynamically created or deleted , perform maintenance processing on the thread groups to obtain the processed thread groups , otherwise perform step four ; step four , when the current thread's time slice is used up , schedule and switch threads and return to step three ; otherwise , when the current thread is blocked , the ready queue is empty , and the load is unbalanced , perform thread migration , then schedule and switch threads and return to step three ; when the current thread is blocked and the ready queue is not empty or the load is balanced , directly schedule and switch threads and return to step three ; when the current thread is not blocked but is halted , thread scheduling ends ; when the current thread is not blocked and not halted , return to step three ; and when a blocked thread returns to the ready state , find the nearest ready thread u ahead of that thread in the structure queue and insert the thread before thread u .

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests (包含的) from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache (当线程) at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
CN101923491A
CLAIM 1
A method for thread group address space scheduling and thread switching in a multi-core environment , characterized by comprising the following steps : step one , partition the threads contained in (queue requests) each process into thread groups to obtain a number of thread groups ; step two , allocate the thread groups , obtain the CPU core for each thread group , and place each thread group into the corresponding local queue ; step three , run each CPU core , and when a thread (queue cache) is dynamically created or deleted , perform maintenance processing on the thread groups to obtain the processed thread groups , otherwise perform step four ; step four , when the current thread's time slice is used up , schedule and switch threads and return to step three ; otherwise , when the current thread is blocked , the ready queue is empty , and the load is unbalanced , perform thread migration , then schedule and switch threads and return to step three ; when the current thread is blocked and the ready queue is not empty or the load is balanced , directly schedule and switch threads and return to step three ; when the current thread is not blocked but is halted , thread scheduling ends ; when the current thread is not blocked and not halted , return to step three ; and when a blocked thread returns to the ready state , find the nearest ready thread u ahead of that thread in the structure queue and insert the thread before thread u .

US9479472B2
CLAIM 8
. The computing device of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command (调度和) from the datacenter queue , delete the message from the first server .
CN101923491A
CLAIM 1
A method for thread group address space scheduling and (delete command) thread switching in a multi-core environment , characterized by comprising the following steps : step one , partition the threads contained in each process into thread groups to obtain a number of thread groups ; step two , allocate the thread groups , obtain the CPU core for each thread group , and place each thread group into the corresponding local queue ; step three , run each CPU core , and when a thread is dynamically created or deleted , perform maintenance processing on the thread groups to obtain the processed thread groups , otherwise perform step four ; step four , when the current thread's time slice is used up , schedule and switch threads and return to step three ; otherwise , when the current thread is blocked , the ready queue is empty , and the load is unbalanced , perform thread migration , then schedule and switch threads and return to step three ; when the current thread is blocked and the ready queue is not empty or the load is balanced , directly schedule and switch threads and return to step three ; when the current thread is not blocked but is halted , thread scheduling ends ; when the current thread is not blocked and not halted , return to step three ; and when a blocked thread returns to the ready state , find the nearest ready thread u ahead of that thread in the structure queue and insert the thread before thread u .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache (当线程) ;

and provide the intercepted message to the consumer worker in response to the message request .
CN101923491A
CLAIM 1
A method for thread group address space scheduling and thread switching in a multi-core environment , characterized by comprising the following steps : step one , partition the threads contained in each process into thread groups to obtain a number of thread groups ; step two , allocate the thread groups , obtain the CPU core for each thread group , and place each thread group into the corresponding local queue ; step three , run each CPU core , and when a thread (queue cache) is dynamically created or deleted , perform maintenance processing on the thread groups to obtain the processed thread groups , otherwise perform step four ; step four , when the current thread's time slice is used up , schedule and switch threads and return to step three ; otherwise , when the current thread is blocked , the ready queue is empty , and the load is unbalanced , perform thread migration , then schedule and switch threads and return to step three ; when the current thread is blocked and the ready queue is not empty or the load is balanced , directly schedule and switch threads and return to step three ; when the current thread is not blocked but is halted , thread scheduling ends ; when the current thread is not blocked and not halted , return to step three ; and when a blocked thread returns to the ready state , find the nearest ready thread u ahead of that thread in the structure queue and insert the thread before thread u .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests (包含的) from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache (当线程) at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
CN101923491A
CLAIM 1
A method for thread group address space scheduling and thread switching in a multi-core environment , characterized by comprising the following steps : step one , partition the threads contained in (queue requests) each process into thread groups to obtain a number of thread groups ; step two , allocate the thread groups , obtain the CPU core for each thread group , and place each thread group into the corresponding local queue ; step three , run each CPU core , and when a thread (queue cache) is dynamically created or deleted , perform maintenance processing on the thread groups to obtain the processed thread groups , otherwise perform step four ; step four , when the current thread's time slice is used up , schedule and switch threads and return to step three ; otherwise , when the current thread is blocked , the ready queue is empty , and the load is unbalanced , perform thread migration , then schedule and switch threads and return to step three ; when the current thread is blocked and the ready queue is not empty or the load is balanced , directly schedule and switch threads and return to step three ; when the current thread is not blocked but is halted , thread scheduling ends ; when the current thread is not blocked and not halted , return to step three ; and when a blocked thread returns to the ready state , find the nearest ready thread u ahead of that thread in the structure queue and insert the thread before thread u .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN101699806A

Filed: 2009-10-27     Issued: 2010-04-28

Inter-network message interworking gateway, system, and method

(Original Assignee) ZTE Corp     (Current Assignee) ZTE Corp

汪林风
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (失败消息) to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
CN101699806A
CLAIM 5
. The inter-network message interworking gateway according to claim 3 , characterized in that the format conversion module further comprises a conversion-message generation unit configured to generate a conversion success/failure message (message request) having the same format as the message currently being converted , and the conversion success/failure message is forwarded to the sending module through the conversion-message forwarding unit .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request (失败消息) and the datacenter queue associated with the message request .
CN101699806A
CLAIM 5
. The inter-network message interworking gateway according to claim 3 , characterized in that the format conversion module further comprises a conversion-message generation unit configured to generate a conversion success/failure message (message request) having the same format as the message currently being converted , and the conversion success/failure message is forwarded to the sending module through the conversion-message forwarding unit .

US9479472B2
CLAIM 9
. The computing device of claim 7 , wherein the VMM application is further configured to : detect a message request (失败消息) sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
CN101699806A
CLAIM 5
. The inter-network message interworking gateway according to claim 3 , characterized in that the format conversion module further comprises a conversion-message generation unit configured to generate a conversion success/failure message (message request) having the same format as the message currently being converted , and the conversion success/failure message is forwarded to the sending module through the conversion-message forwarding unit .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic (在对接) through a network connection to identify the producer worker associated with the message .
CN101699806A
CLAIM 12
. The inter-network message interworking method according to claim 11 , characterized in that , before performing format conversion on a received (network traffic) message , the message gateway further determines whether the message belongs to the service of this gateway ; if so , it performs format conversion on the message ; otherwise , it discards the message , generates a prompt message having the same format as the message , and returns it to the sender of the message .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic (在对接) through a network connection to detect the datacenter queue associated with the message .
CN101699806A
CLAIM 12
. The inter-network message interworking method according to claim 11 , characterized in that , before performing format conversion on a received (network traffic) message , the message gateway further determines whether the message belongs to the service of this gateway ; if so , it performs format conversion on the message ; otherwise , it discards the message , generates a prompt message having the same format as the message , and returns it to the sender of the message .
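Claims 10 and 11 recite observing network traffic through a network connection to identify the producer worker and the datacenter queue associated with a message. Assuming observed traffic is available as simple records with "op", "src_worker", and "dst_queue" fields (a hypothetical format, not taken from the patent or the reference), a rough classification sketch follows.

```python
# Rough sketch of the claims 10-11 idea: inspect observed network traffic to
# learn which local worker is sending to which datacenter queue.
def classify_traffic(observed_packets):
    producers, queues = set(), set()
    for packet in observed_packets:
        if packet.get("op") == "enqueue":          # message sent toward a queue
            producers.add(packet["src_worker"])
            queues.add(packet["dst_queue"])
    return producers, queues

packets = [{"op": "enqueue", "src_worker": "vm-3/worker-a", "dst_queue": "orders"}]
print(classify_traffic(packets))  # ({'vm-3/worker-a'}, {'orders'})
```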

US9479472B2
CLAIM 12
. The computing device of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information (接收消息) , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
CN101699806A
CLAIM 2
. The inter-network message interworking gateway according to claim 1 , characterized in that the receiving module comprises : a service determination unit configured to determine whether a received mail message and/or multimedia message belongs to the service of the inter-network message interworking gateway ; and a received-message (queue usage information) forwarding unit configured to forward mail messages and/or multimedia messages belonging to the service of the inter-network message interworking gateway to the format conversion module .

US9479472B2
CLAIM 13
. The computing device of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information (接收消息) .
CN101699806A
CLAIM 2
. The inter-network message interworking gateway according to claim 1 , characterized in that the receiving module comprises : a service determination unit configured to determine whether a received mail message and/or multimedia message belongs to the service of the inter-network message interworking gateway ; and a received-message (queue usage information) forwarding unit configured to forward mail messages and/or multimedia messages belonging to the service of the inter-network message interworking gateway to the format conversion module .

US9479472B2
CLAIM 14
. The computing device of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (失败消息) that includes matching the consumer worker to the other datacenter queue .
CN101699806A
CLAIM 5
. The inter-network message interworking gateway according to claim 3 , characterized in that the format conversion module further comprises a conversion-message generation unit configured to generate a conversion success/failure message (message request) having the same format as the message currently being converted , and the conversion success/failure message is forwarded to the sending module through the conversion-message forwarding unit .

US9479472B2
CLAIM 15
. The computing device of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (的多媒体消息) between the producer worker and the consumer worker .
CN101699806A
CLAIM 1
An inter-network message interworking gateway comprising a receiving module and a sending module , characterized by further comprising a format conversion module ; the receiving module is configured to receive mail messages and/or multimedia messages ; the format conversion module is configured to convert the mail message into a multimedia message , or to convert the multimedia message into a mail message ; and the sending module is configured to output the multimedia message (second message) or mail message obtained from conversion by the format conversion module .

US9479472B2
CLAIM 16
. The computing device of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request (失败消息) .
CN101699806A
CLAIM 5
. The inter-network message interworking gateway according to claim 3 , characterized in that the format conversion module further comprises a conversion-message generation unit configured to generate a conversion success/failure message (message request) having the same format as the message currently being converted , and the conversion success/failure message is forwarded to the sending module through the conversion-message forwarding unit .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request (失败消息) to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
CN101699806A
CLAIM 5
. The inter-network message interworking gateway according to claim 3 , characterized in that the format conversion module further comprises a conversion-message generation unit configured to generate a conversion success/failure message (message request) having the same format as the message currently being converted , and the conversion success/failure message is forwarded to the sending module through the conversion-message forwarding unit .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information (接收消息) , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
CN101699806A
CLAIM 2
. The inter-network message interworking gateway according to claim 1 , characterized in that the receiving module comprises : a service determination unit configured to determine whether a received mail message and/or multimedia message belongs to the service of the inter-network message interworking gateway ; and a received-message (queue usage information) forwarding unit configured to forward mail messages and/or multimedia messages belonging to the service of the inter-network message interworking gateway to the format conversion module .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request (失败消息) that includes matching the consumer worker to the other datacenter queue .
CN101699806A
CLAIM 5
. The inter-network message interworking gateway according to claim 3 , characterized in that the format conversion module further comprises a conversion-message generation unit configured to generate a conversion success/failure message (message request) having the same format as the message currently being converted , and the conversion success/failure message is forwarded to the sending module through the conversion-message forwarding unit .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request (失败消息) and the datacenter queue associated with the message request .
CN101699806A
CLAIM 5
. The inter-network message interworking gateway according to claim 3 , characterized in that the format conversion module further comprises a conversion-message generation unit configured to generate a conversion success/failure message (message request) having the same format as the message currently being converted , and the conversion success/failure message is forwarded to the sending module through the conversion-message forwarding unit .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
JP2010278484A

Filed: 2009-05-26     Issued: 2010-12-09

Mail relay device

(Original Assignee) Hitachi Ltd; 株式会社日立製作所     

Toshiyuki Kamiya, Masafumi Kinoshita, Takafumi Koike, 隆文 小池, 雅文 木下, 俊之 神谷
US9479472B2
CLAIM 1
. A method to locally process queue requests (の要求) from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
JP2010278484A
CLAIM 1
A relay device connected to a mail management server that stores mail and , in response to a request (queue requests) from a communication terminal , delivers mail addressed to the communication terminal , to a mail transfer server , and to the communication terminal , wherein the relay device receives mail from the mail transfer server ; if it determines that the compression benefit for the received mail exceeds a prescribed value , compresses the mail , including the mail body and the mail header , to create a compressed-mail body , creates a compressed-mail header that includes compression information related to the compression and information contained in the mail header , and transmits the compressed mail , which includes the compressed-mail header and the compressed-mail body , to the mail management server to be stored ; and if it determines that the compression benefit is at or below the prescribed value , transmits the received mail to the mail management server to be stored .

US9479472B2
CLAIM 7
. A computing device to provide local processing (のサイズ) of queue requests (の要求) from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
JP2010278484A
CLAIM 1
A relay device connected to a mail management server that stores mail and , in response to a request (queue requests) from a communication terminal , delivers mail addressed to the communication terminal , to a mail transfer server , and to the communication terminal , wherein the relay device receives mail from the mail transfer server ; if it determines that the compression benefit for the received mail exceeds a prescribed value , compresses the mail , including the mail body and the mail header , to create a compressed-mail body , creates a compressed-mail header that includes compression information related to the compression and information contained in the mail header , and transmits the compressed mail , which includes the compressed-mail header and the compressed-mail body , to the mail management server to be stored ; and if it determines that the compression benefit is at or below the prescribed value , transmits the received mail to the mail management server to be stored .

JP2010278484A
CLAIM 2
The relay device according to claim 1 , wherein , when creating the header of the compressed mail , information to be included in the compressed-mail header , including the Message-ID , is selected from the information contained in the mail header , and the size of the body of the compressed mail (computing device to provide local processing) is added to the Message-ID as the compression information .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests (の要求) from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
JP2010278484A
CLAIM 1
A relay device connected to a mail management server that stores mail and , in response to a request (queue requests) from a communication terminal , delivers mail addressed to the communication terminal , to a mail transfer server , and to the communication terminal , wherein the relay device receives mail from the mail transfer server ; if it determines that the compression benefit for the received mail exceeds a prescribed value , compresses the mail , including the mail body and the mail header , to create a compressed-mail body , creates a compressed-mail header that includes compression information related to the compression and information contained in the mail header , and transmits the compressed mail , which includes the compressed-mail header and the compressed-mail body , to the mail management server to be stored ; and if it determines that the compression benefit is at or below the prescribed value , transmits the received mail to the mail management server to be stored .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
WO2009111799A2

Filed: 2009-03-09     Issued: 2009-09-11

Globally distributed utility computing cloud

(Original Assignee) 3Tera, Inc.     

Peter Nickolov, Bert Armijo, Vladimir Miloushev
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (second virtual machine) ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
WO2009111799A2
CLAIM 35
. The method of claim 34 , wherein the first distributed application descriptor further defines at least one connection between a first virtual machine and a second virtual machine (second server, second message) .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker prior to storing the message in the queue cache at the second server (second virtual machine) .
WO2009111799A2
CLAIM 35
. The method of claim 34 , wherein the first distributed application descriptor further defines at least one connection between a first virtual machine and a second virtual machine (second server, second message) .

US9479472B2
CLAIM 7
. A computing device (application components) to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (second virtual machine) ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
WO2009111799A2
CLAIM 35
. The method of claim 34 , wherein the first distributed application descriptor further defines at least one connection between a first virtual machine and a second virtual machine (second server, second message) .

WO2009111799A2
CLAIM 94
. A system for offering distributed application components (computing device) via a computing network , the system comprising means for offering , via a computing network , distributed application components for use in deployment of one or more distributed applications at one or more server grids of the computing network .

US9479472B2
CLAIM 8
. The computing device (application components) of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server .
WO2009111799A2
CLAIM 94
. A system for offering distributed application components (computing device) via a computing network , the system comprising means for offering , via a computing network , distributed application components for use in deployment of one or more distributed applications at one or more server grids of the computing network .

US9479472B2
CLAIM 9
. The computing device (application components) of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
WO2009111799A2
CLAIM 94
. A system for offering distributed application components (computing device) via a computing network , the system comprising means for offering , via a computing network , distributed application components for use in deployment of one or more distributed applications at one or more server grids of the computing network .

US9479472B2
CLAIM 10
. The computing device (application components) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection (network connection) to identify the producer worker associated with the message .
WO2009111799A2
CLAIM 94
. A system for offering distributed application components (computing device) via a computing network , the system comprising means for offering , via a computing network , distributed application components for use in deployment of one or more distributed applications at one or more server grids of the computing network .

WO2009111799A2
CLAIM 95
. The system of claim 94 wherein the distributed application components includes at least one component selected from a group consisting of : virtual appliances , virtual machines , virtual interfaces , virtual volumes , and virtual network connection (network connection) s .

US9479472B2
CLAIM 11
. The computing device (application components) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection (network connection) to detect the datacenter queue associated with the message .
WO2009111799A2
CLAIM 94
. A system for offering distributed application components (computing device) via a computing network , the system comprising means for offering , via a computing network , distributed application components for use in deployment of one or more distributed applications at one or more server grids of the computing network .

WO2009111799A2
CLAIM 95
. The system of claim 94 wherein the distributed application components includes at least one component selected from a group consisting of : virtual appliances , virtual machines , virtual interfaces , virtual volumes , and virtual network connection (network connection) s .

US9479472B2
CLAIM 12
. The computing device (application components) of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
WO2009111799A2
CLAIM 94
. A system for offering distributed application components (computing device) via a computing network , the system comprising means for offering , via a computing network , distributed application components for use in deployment of one or more distributed applications at one or more server grids of the computing network .

US9479472B2
CLAIM 13
. The computing device (application components) of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information .
WO2009111799A2
CLAIM 94
. A system for offering distributed application components (computing device) via a computing network , the system comprising means for offering , via a computing network , distributed application components for use in deployment of one or more distributed applications at one or more server grids of the computing network .

US9479472B2
CLAIM 14
. The computing device (application components) of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
WO2009111799A2
CLAIM 94
. A system for offering distributed application components (computing device) via a computing network , the system comprising means for offering , via a computing network , distributed application components for use in deployment of one or more distributed applications at one or more server grids of the computing network .

US9479472B2
CLAIM 15
. The computing device (application components) of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (second virtual machine) between the producer worker and the consumer worker .
WO2009111799A2
CLAIM 35
. The method of claim 34 , wherein the first distributed application descriptor further defines at least one connection between a first virtual machine and a second virtual machine (second server, second message) .

WO2009111799A2
CLAIM 94
. A system for offering distributed application components (computing device) via a computing network , the system comprising means for offering , via a computing network , distributed application components for use in deployment of one or more distributed applications at one or more server grids of the computing network .

US9479472B2
CLAIM 16
. The computing device (application components) of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
WO2009111799A2
CLAIM 94
. A system for offering distributed application components (computing device) via a computing network , the system comprising means for offering , via a computing network , distributed application components for use in deployment of one or more distributed applications at one or more server grids of the computing network .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server (second virtual machine) ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
WO2009111799A2
CLAIM 35
. The method of claim 34 , wherein the first distributed application descriptor further defines at least one connection between a first virtual machine and a second virtual machine (second server, second message) .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
WO2009026589A2

Filed: 2008-08-25     Issued: 2009-02-26

Method and/or system for providing and/or analizing and/or presenting decision strategies

(Original Assignee) Fred Cohen     

Fred Cohen
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker (defined condition, storage means, output field) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
WO2009026589A2
CLAIM 2
. The system of claim 1 further comprising : computer , mechanical , or other processing means ;
computer , mechanical , or other data storage means (matching producer, producer worker, producer worker information, determine matching producer worker) ;
computer , mechanical , or other input means ;
computer , mechanical , or other output means .

WO2009026589A2
CLAIM 23
. The system of any of claims 2 through 22 further wherein : an interactive graphical user interface , said interface comprising : a plurality of objects indicating factors ;
a plurality of input fields allowing input of data regarding a situation , decision , or option ;
a plurality of output field (matching producer, producer worker, producer worker information, determine matching producer worker) s for display data relating to said advice or strategies .

WO2009026589A2
CLAIM 28
. The system of claim 1 in which movement of factors are restricted , or restrictions on the addition , removal , or other alterations of factors are used to constrain inputs or situations based on previously set values or predefined condition (matching producer, producer worker, producer worker information, determine matching producer worker) s .

US9479472B2
CLAIM 5
. The method of claim 1 , further comprising : intercepting the message sent by the producer worker (defined condition, storage means, output field) prior to storing the message in the queue cache at the second server .
WO2009026589A2
CLAIM 2
. The system of claim 1 further comprising : computer , mechanical , or other processing means ;
computer , mechanical , or other data storage means (matching producer, producer worker, producer worker information, determine matching producer worker) ;
computer , mechanical , or other input means ;
computer , mechanical , or other output means .

WO2009026589A2
CLAIM 23
. The system of any of claims 2 through 22 further wherein : an interactive graphical user interface , said interface comprising : a plurality of objects indicating factors ;
a plurality of input fields allowing input of data regarding a situation , decision , or option ;
a plurality of output field (matching producer, producer worker, producer worker information, determine matching producer worker) s for display data relating to said advice or strategies .

WO2009026589A2
CLAIM 28
. The system of claim 1 in which movement of factors are restricted , or restrictions on the addition , removal , or other alterations of factors are used to constrain inputs or situations based on previously set values or predefined condition (matching producer, producer worker, producer worker information, determine matching producer worker) s .

US9479472B2
CLAIM 6
. The method of claim 1 , further comprising : executing the producer worker (defined condition, storage means, output field) on a first virtual machine ;

and executing the consumer worker on a second virtual machine , wherein the first virtual machine is configured to be executed on a first physical hardware and the second virtual machine is configured to be executed on the first physical hardware .
WO2009026589A2
CLAIM 2
. The system of claim 1 further comprising : computer , mechanical , or other processing means ;
computer , mechanical , or other data storage means (matching producer, producer worker, producer worker information, determine matching producer worker) ;
computer , mechanical , or other input means ;
computer , mechanical , or other output means .

WO2009026589A2
CLAIM 23
. The system of any of claims 2 through 22 further wherein : an interactive graphical user interface , said interface comprising : a plurality of objects indicating factors ;
a plurality of input fields allowing input of data regarding a situation , decision , or option ;
a plurality of output field (matching producer, producer worker, producer worker information, determine matching producer worker) s for display data relating to said advice or strategies .

WO2009026589A2
CLAIM 28
. The system of claim 1 in which movement of factors are restricted , or restrictions on the addition , removal , or other alterations of factors are used to constrain inputs or situations based on previously set values or predefined condition (matching producer, producer worker, producer worker information, determine matching producer worker) s .
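Claim 6 of US9479472B2 requires the producer worker and the consumer worker to run in different virtual machines that execute on the same physical hardware. A small, hedged check of that co-location condition, assuming a placement map from VM identifier to physical host identifier (the identifiers are illustrative):

```python
# Illustrative co-location check for claim 6: local queue handling applies only
# when the producer's VM and the consumer's VM run on the same physical host.
def co_located(vm_placement, producer_vm, consumer_vm):
    """vm_placement maps VM id -> physical host id."""
    return vm_placement.get(producer_vm) == vm_placement.get(consumer_vm)

placement = {"vm-3": "host-1", "vm-7": "host-1", "vm-9": "host-2"}
print(co_located(placement, "vm-3", "vm-7"))  # True: same physical hardware
print(co_located(placement, "vm-3", "vm-9"))  # False: different hosts
```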

US9479472B2
CLAIM 7
. A computing device (electronic data) to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker (defined condition, storage means, output field) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
WO2009026589A2
CLAIM 2
. The system of claim 1 further comprising : computer , mechanical , or other processing means ;
computer , mechanical , or other data storage means (matching producer, producer worker, producer worker information, determine matching producer worker) ;
computer , mechanical , or other input means ;
computer , mechanical , or other output means .

WO2009026589A2
CLAIM 23
. The system of any of claims 2 through 22 further wherein : an interactive graphical user interface , said interface comprising : a plurality of objects indicating factors ;
a plurality of input fields allowing input of data regarding a situation , decision , or option ;
a plurality of output field (matching producer, producer worker, producer worker information, determine matching producer worker) s for display data relating to said advice or strategies .

WO2009026589A2
CLAIM 26
. An electronic data (computing device) file , recorded or transmitted on a fixed digital medium , that when loaded into an appropriately configured digital apparatus causes the apparatus to embody the system of any of claims 2 through 22 .

WO2009026589A2
CLAIM 28
. The system of claim 1 in which movement of factors are restricted , or restrictions on the addition , removal , or other alterations of factors are used to constrain inputs or situations based on previously set values or predefined condition (matching producer, producer worker, producer worker information, determine matching producer worker) s .

US9479472B2
CLAIM 8
. The computing device (electronic data) of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue , delete the message from the first server .
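
A brief, hedged sketch of the claim-8 behavior: the locally stored message is modified on a command-channel signal, and removed from the first server on a delete command. The in-memory store and the "hide"/"delete" commands are illustrative assumptions only.

local_store = {"orders": {"msg-1": {"body": b"payload", "visible": True}}}

def handle_command(queue_name, msg_id, command):
    entry = local_store.get(queue_name, {}).get(msg_id)
    if entry is None:
        return
    if command == "hide":        # illustrative signal: modify the locally stored message
        entry["visible"] = False
    elif command == "delete":    # delete command received from the datacenter queue
        del local_store[queue_name][msg_id]

handle_command("orders", "msg-1", "delete")
print(local_store["orders"])     # {} -> the message was deleted from the first server
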
WO2009026589A2
CLAIM 26
. An electronic data (computing device) file , recorded or transmitted on a fixed digital medium , that when loaded into an appropriately configured digital apparatus causes the apparatus to embody the system of any of claims 2 through 22 .

US9479472B2
CLAIM 9
. The computing device (electronic data) of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue associated with the message request .
WO2009026589A2
CLAIM 26
. An electronic data (computing device) file , recorded or transmitted on a fixed digital medium , that when loaded into an appropriately configured digital apparatus causes the apparatus to embody the system of any of claims 2 through 22 .

US9479472B2
CLAIM 10
. The computing device (electronic data) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker (defined condition, storage means, output field) associated with the message .
WO2009026589A2
CLAIM 2
. The system of claim 1 further comprising : computer , mechanical , or other processing means ;
computer , mechanical , or other data storage means (matching producer, producer worker, producer worker information, determine matching producer worker) ;
computer , mechanical , or other input means ;
computer , mechanical , or other output means .

WO2009026589A2
CLAIM 23
. The system of any of claims 2 through 22 further wherein : an interactive graphical user interface , said interface comprising : a plurality of objects indicating factors ;
a plurality of input fields allowing input of data regarding a situation , decision , or option ;
a plurality of output field (matching producer, producer worker, producer worker information, determine matching producer worker) s for display data relating to said advice or strategies .

WO2009026589A2
CLAIM 26
. An electronic data (computing device) file , recorded or transmitted on a fixed digital medium , that when loaded into an appropriately configured digital apparatus causes the apparatus to embody the system of any of claims 2 through 22 .

WO2009026589A2
CLAIM 28
. The system of claim 1 in which movement of factors are restricted , or restrictions on the addition , removal , or other alterations of factors are used to constrain inputs or situations based on previously set values or predefined condition (matching producer, producer worker, producer worker information, determine matching producer worker) s .

US9479472B2
CLAIM 11
. The computing device (electronic data) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue associated with the message .
WO2009026589A2
CLAIM 26
. An electronic data (computing device) file , recorded or transmitted on a fixed digital medium , that when loaded into an appropriately configured digital apparatus causes the apparatus to embody the system of any of claims 2 through 22 .

US9479472B2
CLAIM 12
. The computing device (electronic data) of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (defined condition, storage means, output field) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker .
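
To make the "queue user table" element concrete, a minimal Python sketch under an assumed schema: each observed datacenter queue maps to the set of local producer workers and consumer workers seen using it. The schema and all names are hypothetical, not taken from the patent.

from collections import defaultdict

queue_user_table = defaultdict(lambda: {"producers": set(), "consumers": set()})

def observe(queue_name, worker_id, role):
    # record one observation of queue usage; role is "producer" or "consumer"
    queue_user_table[queue_name][role + "s"].add(worker_id)

observe("orders", "worker-17", "producer")   # producer worker information
observe("orders", "worker-42", "consumer")   # consumer worker information
print(dict(queue_user_table))
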
WO2009026589A2
CLAIM 2
. The system of claim 1 further comprising : computer , mechanical , or other processing means ;
computer , mechanical , or other data storage means (matching producer, producer worker, producer worker information, determine matching producer worker) ;
computer , mechanical , or other input means ;
computer , mechanical , or other output means .

WO2009026589A2
CLAIM 23
. The system of any of claims 2 through 22 further wherein : an interactive graphical user interface , said interface comprising : a plurality of objects indicating factors ;
a plurality of input fields allowing input of data regarding a situation , decision , or option ;
a plurality of output field (matching producer, producer worker, producer worker information, determine matching producer worker) s for display data relating to said advice or strategies .

WO2009026589A2
CLAIM 26
. An electronic data (computing device) file , recorded or transmitted on a fixed digital medium , that when loaded into an appropriately configured digital apparatus causes the apparatus to embody the system of any of claims 2 through 22 .

WO2009026589A2
CLAIM 28
. The system of claim 1 in which movement of factors are restricted , or restrictions on the addition , removal , or other alterations of factors are used to constrain inputs or situations based on previously set values or predefined condition (matching producer, producer worker, producer worker information, determine matching producer worker) s .

US9479472B2
CLAIM 13
. The computing device (electronic data) of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information .
WO2009026589A2
CLAIM 26
. An electronic data (computing device) file , recorded or transmitted on a fixed digital medium , that when loaded into an appropriately configured digital apparatus causes the apparatus to embody the system of any of claims 2 through 22 .

US9479472B2
CLAIM 14
. The computing device (electronic data) of claim 12 , wherein the VMM application is further configured to : determine matching producer (defined condition, storage means, output field) worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker (defined condition, storage means, output field) to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
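
A short Python sketch of the claim-14 determination under the same assumed schema: a producer worker and a consumer worker form a matched pair when the queue user table shows them using the same datacenter queue. The table layout is hypothetical.

queue_user_table = {
    "orders": {"producers": {"worker-17"}, "consumers": {"worker-42"}},
    "audit":  {"producers": {"worker-17"}, "consumers": set()},
}

def matched_pairs(table):
    pairs = []
    for queue_name, users in table.items():
        for producer in users["producers"]:
            for consumer in users["consumers"]:
                pairs.append((queue_name, producer, consumer))
    return pairs

print(matched_pairs(queue_user_table))  # [('orders', 'worker-17', 'worker-42')]
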
WO2009026589A2
CLAIM 2
. The system of claim 1 further comprising : computer , mechanical , or other processing means ;
computer , mechanical , or other data storage means (matching producer, producer worker, producer worker information, determine matching producer worker) ;
computer , mechanical , or other input means ;
computer , mechanical , or other output means .

WO2009026589A2
CLAIM 23
. The system of any of claims 2 through 22 further wherein : an interactive graphical user interface , said interface comprising : a plurality of objects indicating factors ;
a plurality of input fields allowing input of data regarding a situation , decision , or option ;
a plurality of output field (matching producer, producer worker, producer worker information, determine matching producer worker) s for display data relating to said advice or strategies .

WO2009026589A2
CLAIM 26
. An electronic data (computing device) file , recorded or transmitted on a fixed digital medium , that when loaded into an appropriately configured digital apparatus causes the apparatus to embody the system of any of claims 2 through 22 .

WO2009026589A2
CLAIM 28
. The system of claim 1 in which movement of factors are restricted , or restrictions on the addition , removal , or other alterations of factors are used to constrain inputs or situations based on previously set values or predefined condition (matching producer, producer worker, producer worker information, determine matching producer worker) s .

US9479472B2
CLAIM 15
. The computing device (electronic data) of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer (defined condition, storage means, output field) and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message between the producer worker (defined condition, storage means, output field) and the consumer worker .
WO2009026589A2
CLAIM 2
. The system of claim 1 further comprising : computer , mechanical , or other processing means ;
computer , mechanical , or other data storage means (matching producer, producer worker, producer worker information, determine matching producer worker) ;
computer , mechanical , or other input means ;
computer , mechanical , or other output means .

WO2009026589A2
CLAIM 23
. The system of any of claims 2 through 22 further wherein : an interactive graphical user interface , said interface comprising : a plurality of objects indicating factors ;
a plurality of input fields allowing input of data regarding a situation , decision , or option ;
a plurality of output field (matching producer, producer worker, producer worker information, determine matching producer worker) s for display data relating to said advice or strategies .

WO2009026589A2
CLAIM 26
. An electronic data (computing device) file , recorded or transmitted on a fixed digital medium , that when loaded into an appropriately configured digital apparatus causes the apparatus to embody the system of any of claims 2 through 22 .

WO2009026589A2
CLAIM 28
. The system of claim 1 in which movement of factors are restricted , or restrictions on the addition , removal , or other alterations of factors are used to constrain inputs or situations based on previously set values or predefined condition (matching producer, producer worker, producer worker information, determine matching producer worker) s .

US9479472B2
CLAIM 16
. The computing device (electronic data) of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker (defined condition, storage means, output field) ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
WO2009026589A2
CLAIM 2
. The system of claim 1 further comprising : computer , mechanical , or other processing means ;
computer , mechanical , or other data storage means (matching producer, producer worker, producer worker information, determine matching producer worker) ;
computer , mechanical , or other input means ;
computer , mechanical , or other output means .

WO2009026589A2
CLAIM 23
. The system of any of claims 2 through 22 further wherein : an interactive graphical user interface , said interface comprising : a plurality of objects indicating factors ;
a plurality of input fields allowing input of data regarding a situation , decision , or option ;
a plurality of output field (matching producer, producer worker, producer worker information, determine matching producer worker) s for display data relating to said advice or strategies .

WO2009026589A2
CLAIM 26
. An electronic data (computing device) file , recorded or transmitted on a fixed digital medium , that when loaded into an appropriately configured digital apparatus causes the apparatus to embody the system of any of claims 2 through 22 .

WO2009026589A2
CLAIM 28
. The system of claim 1 in which movement of factors are restricted , or restrictions on the addition , removal , or other alterations of factors are used to constrain inputs or situations based on previously set values or predefined condition (matching producer, producer worker, producer worker information, determine matching producer worker) s .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker (defined condition, storage means, output field) at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
WO2009026589A2
CLAIM 2
. The system of claim 1 further comprising : computer , mechanical , or other processing means ;
computer , mechanical , or other data storage means (matching producer, producer worker, producer worker information, determine matching producer worker) ;
computer , mechanical , or other input means ;
computer , mechanical , or other output means .

WO2009026589A2
CLAIM 23
. The system of any of claims 2 through 22 further wherein : an interactive graphical user interface , said interface comprising : a plurality of objects indicating factors ;
a plurality of input fields allowing input of data regarding a situation , decision , or option ;
a plurality of output field (matching producer, producer worker, producer worker information, determine matching producer worker) s for display data relating to said advice or strategies .

WO2009026589A2
CLAIM 28
. The system of claim 1 in which movement of factors are restricted , or restrictions on the addition , removal , or other alterations of factors are used to constrain inputs or situations based on previously set values or predefined condition (matching producer, producer worker, producer worker information, determine matching producer worker) s .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information , wherein the observed queue usage information includes one or more of producer worker (defined condition, storage means, output field) information , consumer worker information , datacenter queue information associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
WO2009026589A2
CLAIM 2
. The system of claim 1 further comprising : computer , mechanical , or other processing means ;
computer , mechanical , or other data storage means (matching producer, producer worker, producer worker information, determine matching producer worker) ;
computer , mechanical , or other input means ;
computer , mechanical , or other output means .

WO2009026589A2
CLAIM 23
. The system of any of claims 2 through 22 further wherein : an interactive graphical user interface , said interface comprising : a plurality of objects indicating factors ;
a plurality of input fields allowing input of data regarding a situation , decision , or option ;
a plurality of output field (matching producer, producer worker, producer worker information, determine matching producer worker) s for display data relating to said advice or strategies .

WO2009026589A2
CLAIM 28
. The system of claim 1 in which movement of factors are restricted , or restrictions on the addition , removal , or other alterations of factors are used to constrain inputs or situations based on previously set values or predefined condition (matching producer, producer worker, producer worker information, determine matching producer worker) s .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer (defined condition, storage means, output field) worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker (defined condition, storage means, output field) to another datacenter queue , and identify a message request that includes matching the consumer worker to the other datacenter queue .
WO2009026589A2
CLAIM 2
. The system of claim 1 further comprising : computer , mechanical , or other processing means ;
computer , mechanical , or other data storage means (matching producer, producer worker, producer worker information, determine matching producer worker) ;
computer , mechanical , or other input means ;
computer , mechanical , or other output means .

WO2009026589A2
CLAIM 23
. The system of any of claims 2 through 22 further wherein : an interactive graphical user interface , said interface comprising : a plurality of objects indicating factors ;
a plurality of input fields allowing input of data regarding a situation , decision , or option ;
a plurality of output field (matching producer, producer worker, producer worker information, determine matching producer worker) s for display data relating to said advice or strategies .

WO2009026589A2
CLAIM 28
. The system of claim 1 in which movement of factors are restricted , or restrictions on the addition , removal , or other alterations of factors are used to constrain inputs or situations based on previously set values or predefined condition (matching producer, producer worker, producer worker information, determine matching producer worker) s .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
WO2009014868A2

Filed: 2008-07-01     Issued: 2009-01-29

Scheduling threads in multi-core systems

(Original Assignee) Microsoft Corporation     

Yadhu Gopalan, Bor-Ming Hsieh, Mark Miller
US9479472B2
CLAIM 1
. A method to locally process queue requests from co-located workers in a datacenter , the method comprising : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (multi-core processor) at least partially stored at a second server ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

and providing the message to the consumer worker in response to the message request .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (560) for scheduling threads in a multi-core processor (datacenter queue) system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .
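
For contrast with the charted patent claims, a minimal Python sketch of the scheduling rule recited in WO2009014868A2 claim 1: assign a sequence number based on arrival order, then place the thread in a per-core queue if it has fixed affinity, otherwise in the global run queue. The data structures are illustrative assumptions, not the reference's implementation.

import itertools
from collections import deque

sequence = itertools.count()                       # arrival-order counter
per_core_queues = {core: deque() for core in range(4)}
global_run_queue = deque()

def schedule(thread_id, fixed_affinity_core=None):
    seq_no = next(sequence)                        # sequence number based on time of arrival
    if fixed_affinity_core is not None:            # fixed affinity -> per-processor queue
        per_core_queues[fixed_affinity_core].append((seq_no, thread_id))
    else:                                          # otherwise -> global run queue
        global_run_queue.append((seq_no, thread_id))

schedule("t1", fixed_affinity_core=2)
schedule("t2")
print(per_core_queues[2], list(global_run_queue))
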

US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel associated with the datacenter queue (multi-core processor) ;

and modifying the message in response to receiving the signal .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (560) for scheduling threads in a multi-core processor (datacenter queue) system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .

US9479472B2
CLAIM 3
. The method of claim 2 , further comprising : in response to receiving a delete command from the datacenter queue (multi-core processor) , deleting the message from the datacenter queue .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (560) for scheduling threads in a multi-core processor (datacenter queue) system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .

US9479472B2
CLAIM 4
. The method of claim 1 , further comprising : identifying one or more of : the consumer worker associated with the message request and the datacenter queue (multi-core processor) associated with the message request .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (560) for scheduling threads in a multi-core processor (datacenter queue) system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .

US9479472B2
CLAIM 7
. A computing device (computing device) to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (multi-core processor) at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel associated with the datacenter queue .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (computing device) (560) for scheduling threads in a multi-core processor (datacenter queue) system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .

US9479472B2
CLAIM 8
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : modify the message in response to receiving the signal ;

and in response to receiving a delete command from the datacenter queue (multi-core processor) , delete the message from the first server .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (computing device) (560) for scheduling threads in a multi-core processor (datacenter queue) system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .

US9479472B2
CLAIM 9
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : detect a message request sent from the consumer worker executing a virtual machine ;

and identify one or more of the consumer worker associated with the message request and the datacenter queue (multi-core processor) associated with the message request .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (computing device) (560) for scheduling threads in a multi-core processor (datacenter queue) system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .

US9479472B2
CLAIM 10
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to identify the producer worker associated with the message .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (computing device) (560) for scheduling threads in a multi-core processor system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .

US9479472B2
CLAIM 11
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection to detect the datacenter queue (multi-core processor) associated with the message .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (computing device) (560) for scheduling threads in a multi-core processor (datacenter queue) system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .

US9479472B2
CLAIM 12
. The computing device (computing device) of claim 7 , wherein the VMM application is further configured to : construct a queue user table based on observed queue usage information (multi-core processing system) , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (multi-core processor) information (multi-core processing system) associated with the producer worker , and datacenter queue information associated with the consumer worker .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (computing device) (560) for scheduling threads in a multi-core processor (datacenter queue) system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .

WO2009014868A2
CLAIM 7
. The method of claim 6 , wherein the application is executed by one of : the multi-core processing system (queue usage information, datacenter queue information) (210) locally and another processing system remotely .

US9479472B2
CLAIM 13
. The computing device (computing device) of claim 12 , wherein the VMM application is further configured to : update the queue user table based on the observed queue usage information (multi-core processing system) .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (computing device) (560) for scheduling threads in a multi-core processor system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .

WO2009014868A2
CLAIM 7
. The method of claim 6 , wherein the application is executed by one of : the multi-core processing system (queue usage information, datacenter queue information) (210) locally and another processing system remotely .

US9479472B2
CLAIM 14
. The computing device (computing device) of claim 12 , wherein the VMM application is further configured to : determine matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (multi-core processor) , and identify a message request that includes matching the consumer worker to the other datacenter queue .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (computing device) (560) for scheduling threads in a multi-core processor (datacenter queue) system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .

US9479472B2
CLAIM 15
. The computing device (computing device) of claim 14 , wherein the VMM application is further configured to : in response to an identification of the matching producer and consumer worker pairs , provide matched queue information to an intercept module of the VMM application , wherein the intercept module increases a speed of handling a second message (processing time) between the producer worker and the consumer worker .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (computing device) (560) for scheduling threads in a multi-core processor system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .

WO2009014868A2
CLAIM 5
. The method of claim 3 , wherein the sequence number (336) is weighted based on one from a set of : a predefined increment , a system condition , a number of currently running applications , an expected processing time (second message) of the thread , and a core type .
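
A one-function sketch of the claim-5 weighting idea, assuming (for illustration only) that the sequence number is biased by the thread's expected processing time; the specific formula is not taken from the reference.

def weighted_sequence_number(arrival_index, expected_processing_time, weight=0.1):
    # illustrative weighting: bias arrival order by the thread's expected processing time
    return arrival_index + weight * expected_processing_time

print(weighted_sequence_number(5, 20.0))  # 7.0
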

US9479472B2
CLAIM 16
. The computing device (computing device) of claim 15 , wherein the intercept module of the VMM application is configured to : intercept the message sent by the producer worker ;

store the message in the queue cache ;

and provide the intercepted message to the consumer worker in response to the message request .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (computing device) (560) for scheduling threads in a multi-core processor system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue (multi-core processor) at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (560) for scheduling threads in a multi-core processor (datacenter queue) system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .

US9479472B2
CLAIM 18
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : constructing a queue user table based on observed queue usage information (multi-core processing system) , wherein the observed queue usage information includes one or more of producer worker information , consumer worker information , datacenter queue (multi-core processor) information (multi-core processing system) associated with the producer worker , and datacenter queue information associated with the consumer worker ;

and updating the queue user table based on the observed queue usage information .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (560) for scheduling threads in a multi-core processor (datacenter queue) system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .

WO2009014868A2
CLAIM 7
. The method of claim 6 , wherein the application is executed by one of : the multi-core processing system (queue usage information, datacenter queue information) (210) locally and another processing system remotely .

US9479472B2
CLAIM 19
. The non-transitory computer-readable storage device of claim 18 , wherein the instructions , when executed by the processor , further comprise : determining matching producer worker and consumer worker pairs through use of the queue user table through a process to : identify a message that includes matching the producer worker to another datacenter queue (multi-core processor) , and identify a message request that includes matching the consumer worker to the other datacenter queue .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (560) for scheduling threads in a multi-core processor (datacenter queue) system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .

US9479472B2
CLAIM 20
. The non-transitory computer-readable storage device of claim 17 , wherein the instructions , when executed by the processor , further comprise : identifying one or more of : the consumer worker associated with the message request and the datacenter queue (multi-core processor) associated with the message request .
WO2009014868A2
CLAIM 1
. A method (600) to be executed at least in part in a computing device (560) for scheduling threads in a multi-core processor (datacenter queue) system (210) , the method (600) comprising : receiving (602) a thread (332) to be scheduled for processing by the processor system (210) ;
determining an affinity status of the received thread (332) ;
assigning (604) a sequence number (336) to the thread based on a time of arrival of the thread ;
and if the thread (332) has a fixed affinity for a particular core (212 , 214 , 216 , 218) , placing the thread (332) in a per-processor queue (222 , 224 , 226 , 228) for the particular core (212 , 214 , 216 , 218) ;
else placing (608) the thread in a global run queue (202 , 338) for all available cores (212 , 214 , 216 , 218) .




US9479472B2

Filed: 2013-02-28     Issued: 2016-10-25

Local message queue processing for co-located workers

(Original Assignee) Empire Technology Development LLC     (Current Assignee) INVINCIBLE IP LLC ; Ardent Research Corp

Ezekiel Kruglick
CN101216814A

Filed: 2007-12-26     Issued: 2008-07-09

一种多核多操作系统之间的通信方法及系统 (Method and system for communication between multiple operating systems in a multi-core environment)

(Original Assignee) Hangzhou H3C Technologies Co Ltd     (Current Assignee) New H3C Technologies Co Ltd

朱而刚
US9479472B2
CLAIM 2
. The method of claim 1 , further comprising : receiving a signal from a command channel (data channel) associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
CN101216814A
CLAIM 1
. A communication system between multiple operating systems in a multi-core environment , characterized in that the operating systems transfer data through a virtual data channel (command channel) , the virtual data channel comprising virtual interfaces corresponding to the interconnected operating systems .
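
A hedged Python sketch of the arrangement recited in CN101216814A claim 1: two operating systems exchange data over a virtual data channel formed by a pair of interconnected virtual interfaces. Queue-backed interfaces and all names are assumptions made for illustration, not the reference's implementation.

from collections import deque

class VirtualInterface:
    def __init__(self):
        self.inbox = deque()
        self.peer = None

    def connect(self, other):
        # interconnect the two virtual interfaces to form the virtual data channel
        self.peer, other.peer = other, self

    def send(self, data):
        # data transfer between the operating systems over the channel
        self.peer.inbox.append(data)

os_a_if, os_b_if = VirtualInterface(), VirtualInterface()
os_a_if.connect(os_b_if)
os_a_if.send(b"hello from OS A")
print(os_b_if.inbox.popleft())
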

US9479472B2
CLAIM 7
. A computing device to provide local processing of queue requests from co-located workers , the computing device comprising : a memory configured to store instructions ;

and a processor coupled to the memory , the processor executing a virtual machine manager (VMM) application , wherein the VMM application is configured to : detect a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercept the message sent by the producer worker ;

store the message in a queue cache at the first server ;

detect a consumer worker at the first server ;

provide the message to the consumer worker ;

and receive a signal from a command channel (data channel) associated with the datacenter queue .
CN101216814A
CLAIM 1
. A communication system between multiple operating systems in a multi-core environment , characterized in that the operating systems transfer data through a virtual data channel (command channel) , the virtual data channel comprising virtual interfaces corresponding to the interconnected operating systems .

US9479472B2
CLAIM 10
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection (interconnection) to identify the producer worker associated with the message .
CN101216814A
CLAIM 1
. A communication system between multiple operating systems in a multi-core environment , characterized in that the operating systems transfer data through a virtual data channel , the virtual data channel comprising virtual interfaces corresponding to the interconnected (network connection) operating systems .

US9479472B2
CLAIM 11
. The computing device of claim 7 , wherein the VMM application is further configured to : observe network traffic through a network connection (interconnection) to detect the datacenter queue associated with the message .
CN101216814A
CLAIM 1
. A communication system between multiple operating systems in a multi-core environment , characterized in that the operating systems transfer data through a virtual data channel , the virtual data channel comprising virtual interfaces corresponding to the interconnected (network connection) operating systems .

US9479472B2
CLAIM 17
. A non-transitory computer-readable storage device with instructions stored thereon to locally process queue requests from co-located workers in a datacenter , the instructions , when executed by a processor , comprise : detecting a producer worker at a first server , wherein the producer worker sends a message to a datacenter queue at least partially stored at a second server ;

intercepting the message sent by the producer worker ;

storing the message in a queue cache at the first server ;

detecting a consumer worker at the first server , wherein the consumer worker sends a message request to the datacenter queue ;

providing the message to the consumer worker in response to the message request ;

receiving a signal from a command channel (data channel) associated with the datacenter queue ;

and modifying the message in response to receiving the signal .
CN101216814A
CLAIM 1
. A communication system between multiple operating systems in a multi-core environment , characterized in that the operating systems transfer data through a virtual data channel (command channel) , the virtual data channel comprising virtual interfaces corresponding to the interconnected operating systems .