Targeted Patent:

Patent: US8850517B2
Filed: 2013-01-15
Issued: 2014-09-30
Patent Holder: (Original Assignee) TAASERA Inc     (Current Assignee) TAASERA Inc
Inventor(s): Srinivas Kumar

Title: Runtime risk detection based on user, application, and system action sequence correlation

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US8336079B2
Filed: 2008-12-31
Issued: 2012-12-18
Patent Holder: (Original Assignee) Hytrust Inc     (Current Assignee) Hytrust Inc
Inventor(s): Renata Budko, Hemma Prafullchandra, Eric Ming Chiu, Boris Strongin

Title: Intelligent security control system for virtualized ecosystems

[FEATURE ID: 1] method, task | mechanism, procedure, device, system, computer, process, step | [FEATURE ID: 1] method, control system
[TRANSITIVE ID: 2] assessing, storing, identifying | determining, analyzing, establishing, providing, monitoring, defining, obtaining | [TRANSITIVE ID: 2] maintaining, evaluating, evaluation
[FEATURE ID: 3] runtime risk, rules, assessment policies, actions, explicit input | conditions, behavior, operations, policies, activities, data, attributes | [FEATURE ID: 3] administrative control, logical assets, control information, behavioral, environmental attributes, properties, determination, controls, behaviors, information
[FEATURE ID: 4] application program, rule, action sequence, action, application action, form, computing system, operation, second action | application, activity, event, object, execution, item, element | [FEATURE ID: 4] ecosystem, attempt, environment, entity, logical asset
[FEATURE ID: 5] device, rules database, policy database, application, processing device, user, execution, state | system, hardware, network, logic, processor, controller, policy | [FEATURE ID: 5] subject logical asset, computer, physical
[TRANSITIVE ID: 6] comprising | including, by, involves, performing, which, having, involving | [TRANSITIVE ID: 6] comprising
[TRANSITIVE ID: 7] using | of, applying, the, regarding | [TRANSITIVE ID: 7] attempting
[FEATURE ID: 8] runtime monitor | network, hardware, computing, computer | [FEATURE ID: 8] logical
[FEATURE ID: 9] claim | figure, claim of, paragraph, clam, the claim, clause, statement | [FEATURE ID: 9] claim
[FEATURE ID: 10] activity | operation, work, task, functionality, processing, manipulation | [FEATURE ID: 10] such manipulation
1 . A method [FEATURE ID: 1]

for assessing [TRANSITIVE ID: 2]

runtime risk [FEATURE ID: 3]

for an application program [FEATURE ID: 4]

that executes on a device [FEATURE ID: 5]

, comprising [TRANSITIVE ID: 6]

: storing [TRANSITIVE ID: 2]

, in a rules database [FEATURE ID: 5]

, a plurality of rules [FEATURE ID: 3]

, wherein each rule [FEATURE ID: 4]

identifies an action sequence [FEATURE ID: 4]

; storing , in a policy database [FEATURE ID: 5]

, a plurality of assessment policies [FEATURE ID: 3]

, wherein each assessment policy includes at least one rule of the plurality of rules ; identifying [TRANSITIVE ID: 2]

, using [TRANSITIVE ID: 7]

at least one assessment policy , a runtime risk for an application program that executes on a device , wherein the identified runtime risk indicates a risk or threat of the identified action sequence of the application [FEATURE ID: 5]

; and identifying , by a runtime monitor [FEATURE ID: 8]

including a processing device [FEATURE ID: 5]

, a behavior score for the application program that executes on the device based on the identified runtime risk , wherein the action sequence is a sequence of at least two performed actions [FEATURE ID: 3]

, and each performed action [FEATURE ID: 4]

is at least one of : a user action , an application action [FEATURE ID: 4]

, and a system action . 2 . The method of claim [FEATURE ID: 9]

1 , wherein the user action is any form [FEATURE ID: 4]

of explicit input [FEATURE ID: 3]

from a user [FEATURE ID: 5]

of a computing system [FEATURE ID: 4]

. 3 . The method of claim 1 , wherein the application action is any activity [FEATURE ID: 10]

performed by the application initiated programmatically by a task [FEATURE ID: 1]

in execution [FEATURE ID: 5]

of a computing system . 4 . The method of claim 1 , wherein the system action is any operation [FEATURE ID: 4]

performed by a computing system on behalf of , or as a consequence of , a user action or application action that changes the state [FEATURE ID: 5]

of the computing system . 5 . The method of claim 1 , wherein a first action of the at least two performed actions is performed at a first time , and the second action [FEATURE ID: 4]
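Claim 1 of the targeted patent describes a concrete pipeline: rules identifying action sequences, assessment policies grouping rules, a runtime risk identified when a sequence is observed, and a behavior score derived from that risk. The following is a minimal illustrative sketch of that structure, not TAASERA's actual implementation; the class names, the subsequence-matching strategy, and the scoring formula are all assumptions made for illustration:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    action_sequence: list[str]  # ordered user/app/system actions, e.g. "sys:write_registry"
    risk: float                 # risk contributed when the sequence is observed

@dataclass
class AssessmentPolicy:
    name: str
    rules: list[Rule]           # each policy includes at least one rule

def contains_sequence(actions: list[str], sequence: list[str]) -> bool:
    """True if `sequence` occurs within `actions` in order (not necessarily contiguously)."""
    it = iter(actions)
    return all(step in it for step in sequence)

def identify_runtime_risk(policy: AssessmentPolicy, observed: list[str]) -> float:
    """Runtime risk = highest risk among rules whose action sequence was performed."""
    matched = [r.risk for r in policy.rules if contains_sequence(observed, r.action_sequence)]
    return max(matched, default=0.0)

def behavior_score(runtime_risk: float) -> int:
    """Map the identified runtime risk onto a 0-100 behavior score (higher = safer)."""
    return round(100 * (1.0 - runtime_risk))
```

The in-order (rather than contiguous) match reflects that the claim requires a sequence of at least two performed actions but does not require them to be adjacent.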

1 . A method [FEATURE ID: 1]

for maintaining [TRANSITIVE ID: 2]

administrative control [FEATURE ID: 3]

over logical assets [FEATURE ID: 3]

in a virtualized ecosystem [FEATURE ID: 4]

, comprising [TRANSITIVE ID: 6]

: in response to an attempt [FEATURE ID: 4]

to manipulate a subject logical asset [FEATURE ID: 5]

of the virtualized ecosystem , evaluating [TRANSITIVE ID: 2]

, by a computer [FEATURE ID: 5]

- based control system [FEATURE ID: 1]

communicatively coupled to an underlying physical [FEATURE ID: 5]

, computer - based environment [FEATURE ID: 4]

abstracted by the virtualized ecosystem , control information [FEATURE ID: 3]

for the subject logical asset of the virtualized ecosystem and of an entity [FEATURE ID: 4]

attempting [TRANSITIVE ID: 7]

administrative manipulation of the subject logical asset , wherein the control information includes contextual , behavioral [FEATURE ID: 3]

and environmental attributes [FEATURE ID: 3]

of the subject logical asset ; deriving from the evaluation [FEATURE ID: 2]

, contextualized properties [FEATURE ID: 3]

of the subject logical asset , wherein deriving the contextualized properties comprises determining whether ( i ) the entity attempting the administrative manipulation of the logical asset [FEATURE ID: 4]

has sufficient rights to perform such manipulation [FEATURE ID: 10]

, and ( ii ) the attempted administrative manipulation of the subject logical asset will result in a permissible communicative coupling with other logical assets of the virtualized ecosystem or permissible interaction between the subject logical asset and the underlying physical , computer - based environment , thereby determining whether the administrative manipulation is permissible ; and enforcing , by the control system and according to the determination [FEATURE ID: 3]

, controls [FEATURE ID: 3]

for the subject logical asset to permit or deny the attempted administrative manipulation . 2 . The method of claim [FEATURE ID: 9]

1 , wherein the controls are derived from existing controls for like logical assets of the virtualized ecosystem . 3 . The method of claim 1 , wherein the controls are inherited from like logical assets of the virtualized ecosystem . 4 . The method of claim 1 , further comprising enforcing learned behaviors [FEATURE ID: 3]

for the subject logical asset , said behaviors being learned from systems external to the virtualized ecosystem . 5 . The method of claim 4 , wherein the systems external to the virtualized ecosystem comprise instances of systems having control systems configured to share information [FEATURE ID: 3]

with the computer - based control system for the virtualized ecosystem as part of a virtual community . 6 . The method of claim 1 , wherein logical [FEATURE ID: 8]
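The Hytrust claim quoted above turns on a two-part test: (i) does the entity have sufficient rights for the attempted manipulation, and (ii) would the manipulation result only in permissible communicative couplings with other logical assets. A minimal sketch of that permit/deny decision, with all parameter names and the rights/coupling representation assumed for illustration:

```python
def evaluate_manipulation(entity_rights: set[str], required_right: str,
                          resulting_couplings: list[tuple[str, str]],
                          permitted_couplings: set[tuple[str, str]]) -> bool:
    """Permit the attempted administrative manipulation only if (i) the entity
    holds the required right and (ii) every communicative coupling the
    manipulation would create between logical assets is permitted."""
    has_rights = required_right in entity_rights
    couplings_ok = all(c in permitted_couplings for c in resulting_couplings)
    return has_rights and couplings_ok
```

Enforcement (the final step of the claim) would then simply allow or block the operation based on this boolean determination.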








Targeted Patent:

Patent: US8850517B2
Filed: 2013-01-15
Issued: 2014-09-30
Patent Holder: (Original Assignee) TAASERA Inc     (Current Assignee) TAASERA Inc
Inventor(s): Srinivas Kumar

Title: Runtime risk detection based on user, application, and system action sequence correlation

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US8327441B2
Filed: 2011-02-17
Issued: 2012-12-04
Patent Holder: (Original Assignee) TAASERA Inc     (Current Assignee) TAASERA Inc
Inventor(s): Srinivas Kumar, Gurudatt Shashikumar

Title: System and method for application attestation

[FEATURE ID: 1] method, device, rules database, runtime monitor, processing device, user, computing system | system, processor, platform, program, network, server, component | [FEATURE ID: 1] method, computing platform, subsequent execution context
[TRANSITIVE ID: 2] assessing, storing, identifying | determining, obtaining, processing, establishing, defining, monitoring, computing | [TRANSITIVE ID: 2] providing, receiving, indicating, generating, generating
[FEATURE ID: 3] runtime risk, state | characteristics, parameters, conditions, properties, metrics, trustworthiness, security | [FEATURE ID: 3] attributes, security information, security risks
[FEATURE ID: 4] application program, action, application action, form | event, activity, identifier, operation, agent, attestation, object | [FEATURE ID: 4] attestation service, application, attestation server, execution analysis, attestation result, application artifact, introspective security context
[TRANSITIVE ID: 5] executes, identifies, includes, indicates | describes, reflects, provides, implements, defines, contains, determines | [TRANSITIVE ID: 5] comprises
[TRANSITIVE ID: 6] comprising, using | including, by, through, containing, involving, having, with | [TRANSITIVE ID: 6] using, comprising
[FEATURE ID: 7] rules | actions, instructions, rights, roles, restrictions, settings, conditions | [FEATURE ID: 7] authorization rules
[FEATURE ID: 8] action sequence | activity, interaction, association, behavior | [FEATURE ID: 8] transaction
[FEATURE ID: 9] policy database | schema, configuration, file, database | [FEATURE ID: 9] runtime execution context
[FEATURE ID: 10] assessment policies | policies, classifications, assessments, algorithms, indicators, metrics, rules | [FEATURE ID: 10] security assertions, collaboration services
[FEATURE ID: 11] sequence | list, collection, series, subset, plurality | [FEATURE ID: 11] set
[FEATURE ID: 12] actions | processes, applications, functions, features, instances, classes, configurations | [FEATURE ID: 12] executable file binaries, components, subsequent changes
[FEATURE ID: 13] claim | figure, aspect, item, paragraph, the claim, clam, clause | [FEATURE ID: 13] claim
[FEATURE ID: 14] explicit input | data, request, notification, communication, signal, submission, application | [FEATURE ID: 14] report, attestation results, network access
[FEATURE ID: 15] task | user, thread, process, node | [FEATURE ID: 15] parent
[FEATURE ID: 16] consequence | proxy, target, substitute, cause, surrogate, precursor | [FEATURE ID: 16] reference
1 . A method [FEATURE ID: 1]

for assessing [TRANSITIVE ID: 2]

runtime risk [FEATURE ID: 3]

for an application program [FEATURE ID: 4]

that executes [TRANSITIVE ID: 5]

on a device [FEATURE ID: 1]

, comprising [TRANSITIVE ID: 6]

: storing [TRANSITIVE ID: 2]

, in a rules database [FEATURE ID: 1]

, a plurality of rules [FEATURE ID: 7]

, wherein each rule identifies [TRANSITIVE ID: 5]

an action sequence [FEATURE ID: 8]

; storing , in a policy database [FEATURE ID: 9]

, a plurality of assessment policies [FEATURE ID: 10]

, wherein each assessment policy includes [TRANSITIVE ID: 5]

at least one rule of the plurality of rules ; identifying [TRANSITIVE ID: 2]

, using [TRANSITIVE ID: 6]

at least one assessment policy , a runtime risk for an application program that executes on a device , wherein the identified runtime risk indicates [TRANSITIVE ID: 5]

a risk or threat of the identified action sequence of the application ; and identifying , by a runtime monitor [FEATURE ID: 1]

including a processing device [FEATURE ID: 1]

, a behavior score for the application program that executes on the device based on the identified runtime risk , wherein the action sequence is a sequence [FEATURE ID: 11]

of at least two performed actions [FEATURE ID: 12]

, and each performed action [FEATURE ID: 4]

is at least one of : a user action , an application action [FEATURE ID: 4]

, and a system action . 2 . The method of claim [FEATURE ID: 13]

1 , wherein the user action is any form [FEATURE ID: 4]

of explicit input [FEATURE ID: 14]

from a user [FEATURE ID: 1]

of a computing system [FEATURE ID: 1]

. 3 . The method of claim 1 , wherein the application action is any activity performed by the application initiated programmatically by a task [FEATURE ID: 15]

in execution of a computing system . 4 . The method of claim 1 , wherein the system action is any operation performed by a computing system on behalf of , or as a consequence [FEATURE ID: 16]

of , a user action or application action that changes the state [FEATURE ID: 3]

1 . A method [FEATURE ID: 1]

of providing [TRANSITIVE ID: 2]

an attestation service [FEATURE ID: 4]

for an application [FEATURE ID: 4]

at runtime executing on a computing platform [FEATURE ID: 1]

using [TRANSITIVE ID: 6]

an attestation server [FEATURE ID: 4]

, comprising [TRANSITIVE ID: 6]

: receiving [TRANSITIVE ID: 2]

, by the attestation server remote from the computing platform : a runtime execution context [FEATURE ID: 9]

indicating [TRANSITIVE ID: 2]

attributes [FEATURE ID: 3]

of the application at runtime , wherein the attributes comprise one or more executable file binaries [FEATURE ID: 12]

of the application and loaded components [FEATURE ID: 12]

of the application ; and a security context providing security information [FEATURE ID: 3]

about the application , wherein the security information comprises [TRANSITIVE ID: 5]

an execution analysis [FEATURE ID: 4]

of the one or more executable file binaries and the loaded components ; generating [TRANSITIVE ID: 2]

, by the attestation server , a report [FEATURE ID: 14]

indicating security risks [FEATURE ID: 3]

associated with the application based on the received runtime execution context and the received security context , as an attestation result [FEATURE ID: 4]

; and sending , by the attestation server , the attestation result associated with the application . 2 . The method of claim [FEATURE ID: 13]

1 , further comprising : generating , by the attestation server , an application artifact [FEATURE ID: 4]

as a reference [FEATURE ID: 16]

for changes in a subsequent execution context [FEATURE ID: 1]

; and sending the generated application artifact such that subsequent changes [FEATURE ID: 12]

to the runtime execution context are tracked based on the generated application artifact . 3 . The method according to claim 2 , further comprising digitally signing the attestation results [FEATURE ID: 14]

, wherein the attributes further comprise parent [FEATURE ID: 15]

- child process associations of the application . 4 . The method of claim 1 , wherein the received security context is an introspective security context [FEATURE ID: 4]

, and wherein the execution analysis is a static , dynamic , or virtual analysis of the one or more executable file binaries and the loaded components . 5 . The method of claim 4 , wherein the generating [FEATURE ID: 2]

of the report indicating security risks associated with the application includes generating , by the attestation server , one or more security assertions [FEATURE ID: 10]

that pertain to the received runtime execution context and the received introspective security context . 6 . The method according to claim 1 , further comprising authenticating the application using a plurality of collaboration services [FEATURE ID: 10]

. 7 . The method according to claim 1 , further comprising controlling a user ' s transaction [FEATURE ID: 8]

with the application by applying a set [FEATURE ID: 11]

of authorization rules [FEATURE ID: 7]

in accordance with the attestation results . 8 . The method according to claim 1 , further comprising controlling a user ' s network access [FEATURE ID: 14]
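The attestation claim above describes the server combining two inputs: a runtime execution context (executable file binaries and loaded components of the application) and a security context (an execution analysis of those binaries and components), from which it generates a report of security risks as the attestation result. A toy sketch of that combination, assuming hash-based flagging purely for illustration (the patent does not specify the analysis mechanism):

```python
import hashlib
from dataclasses import dataclass

@dataclass
class RuntimeExecutionContext:
    binaries: dict[str, bytes]   # file name -> contents (executable file binaries)
    loaded_components: list[str]

@dataclass
class SecurityContext:
    flagged_digests: set[str]    # digests the execution analysis marked as risky
    flagged_components: set[str]

def attest(ctx: RuntimeExecutionContext, sec: SecurityContext) -> dict:
    """Generate a report indicating security risks, as the attestation result."""
    risks = []
    for name, data in ctx.binaries.items():
        if hashlib.sha256(data).hexdigest() in sec.flagged_digests:
            risks.append(f"binary:{name}")
    risks += [f"component:{c}" for c in ctx.loaded_components
              if c in sec.flagged_components]
    return {"security_risks": risks, "trusted": not risks}
```

Per dependent claim 2, the server could additionally hash the whole context into an application artifact so that subsequent changes to the runtime execution context are detectable by comparison.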








Targeted Patent:

Patent: US8850517B2
Filed: 2013-01-15
Issued: 2014-09-30
Patent Holder: (Original Assignee) TAASERA Inc     (Current Assignee) TAASERA Inc
Inventor(s): Srinivas Kumar

Title: Runtime risk detection based on user, application, and system action sequence correlation

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US8291500B1
Filed: 2012-03-29
Issued: 2012-10-16
Patent Holder: (Original Assignee) Cyber Engr Services Inc     (Current Assignee) Cyber Engr Services Inc
Inventor(s): Hermes Bojaxhi, Joseph Drissel, Daniel Raygoza

Title: Systems and methods for automated malware artifact retrieval and analysis

[FEATURE ID: 1] method, rules database, policy database, processing device, user, task, operation | system, processor, program, computer, server, node, step | [FEATURE ID: 1] computerized method, malware artifact file, device, host, victim computing device
[TRANSITIVE ID: 2] assessing, storing, identifying | monitoring, analyzing, detecting, defining, capturing, verifying, establishing | [TRANSITIVE ID: 2] processing, receiving, identifying, retrieving, determining
[FEATURE ID: 3] runtime risk, behalf, state | performance, functionality, operations, initiation, data, processing, detection | [FEATURE ID: 3] program code, execution
[FEATURE ID: 4] application program | application, entity, apparatus, agent, output, asset, object | [FEATURE ID: 4] electronic data store, analyzer device separate, attacker computing device
[FEATURE ID: 5] device | host, node, system, source, public, network, service | [FEATURE ID: 5] accessible network resource, party
[TRANSITIVE ID: 6] comprising, using | including, with, by, the, of, through, performing | [TRANSITIVE ID: 6] comprising
[FEATURE ID: 7] rules | instructions, scripts, orders, data, parameters, routines, protocols | [FEATURE ID: 7] commands, malware
[FEATURE ID: 8] rule | request, program, key, record | [FEATURE ID: 8] file
[FEATURE ID: 9] action sequence | algorithm, attribute, entity, element, effect | [FEATURE ID: 9] transformation
[FEATURE ID: 10] least | or, minus, lest, last, lease, most | [FEATURE ID: 10] least
[TRANSITIVE ID: 11] identified | received, selected, specified, indicated, generated | [TRANSITIVE ID: 11] stored
[FEATURE ID: 12] runtime monitor | node, machine, process, client | [FEATURE ID: 12] file name
[FEATURE ID: 13] actions | instructions, transactions, events, tasks, commands | [FEATURE ID: 13] files
[FEATURE ID: 14] action, application action | operation, event, task, activity, effect, function, application | [FEATURE ID: 14] instruction
[FEATURE ID: 15] claim | item, clause, paragraph, figure, formula, aspect, claim of | [FEATURE ID: 15] claim
[FEATURE ID: 16] explicit input, activity | request, data, instruction, message, code, information, query | [FEATURE ID: 16] user input, command, first universal resource locator
[FEATURE ID: 17] computing system | resource, command, user, request | [FEATURE ID: 17] universal resource locator
[FEATURE ID: 18] execution | software, memory, hardware, one | [FEATURE ID: 18] data
[FEATURE ID: 19] consequence | target, result, source, function | [FEATURE ID: 19] command type
1 . A method [FEATURE ID: 1]

for assessing [TRANSITIVE ID: 2]

runtime risk [FEATURE ID: 3]

for an application program [FEATURE ID: 4]

that executes on a device [FEATURE ID: 5]

, comprising [TRANSITIVE ID: 6]

: storing [TRANSITIVE ID: 2]

, in a rules database [FEATURE ID: 1]

, a plurality of rules [FEATURE ID: 7]

, wherein each rule [FEATURE ID: 8]

identifies an action sequence [FEATURE ID: 9]

; storing , in a policy database [FEATURE ID: 1]

, a plurality of assessment policies , wherein each assessment policy includes at least [FEATURE ID: 10]

one rule of the plurality of rules ; identifying [TRANSITIVE ID: 2]

, using [TRANSITIVE ID: 6]

at least one assessment policy , a runtime risk for an application program that executes on a device , wherein the identified [TRANSITIVE ID: 11]

runtime risk indicates a risk or threat of the identified action sequence of the application ; and identifying , by a runtime monitor [FEATURE ID: 12]

including a processing device [FEATURE ID: 1]

, a behavior score for the application program that executes on the device based on the identified runtime risk , wherein the action sequence is a sequence of at least two performed actions [FEATURE ID: 13]

, and each performed action [FEATURE ID: 14]

is at least one of : a user action , an application action [FEATURE ID: 14]

, and a system action . 2 . The method of claim [FEATURE ID: 15]

1 , wherein the user action is any form of explicit input [FEATURE ID: 16]

from a user [FEATURE ID: 1]

of a computing system [FEATURE ID: 17]

. 3 . The method of claim 1 , wherein the application action is any activity [FEATURE ID: 16]

performed by the application initiated programmatically by a task [FEATURE ID: 1]

in execution [FEATURE ID: 18]

of a computing system . 4 . The method of claim 1 , wherein the system action is any operation [FEATURE ID: 1]

performed by a computing system on behalf [FEATURE ID: 3]

of , or as a consequence [FEATURE ID: 19]

of , a user action or application action that changes the state [FEATURE ID: 3]

1 . A computerized method [FEATURE ID: 1]

for automatically processing [TRANSITIVE ID: 2]

a plurality of files [FEATURE ID: 13]

, comprising [TRANSITIVE ID: 6]

: receiving [TRANSITIVE ID: 2]

user input [FEATURE ID: 16]

comprising a universal resource locator [FEATURE ID: 17]

, the universal resource locator identifying [TRANSITIVE ID: 2]

a malware artifact file [FEATURE ID: 1]

at a command [FEATURE ID: 16]

and control node ; retrieving [TRANSITIVE ID: 2]

the malware artifact file stored [TRANSITIVE ID: 11]

at the command and control node ; determining [TRANSITIVE ID: 2]

whether the malware artifact file is at least [FEATURE ID: 10]

partially obfuscated ; decoding the malware artifact file to reverse at least one obfuscating transformation [FEATURE ID: 9]

if the malware artifact file is at least partially obfuscated ; storing the malware artifact file in an electronic data store [FEATURE ID: 4]

; and analyzing the malware artifact file retrieved from command and control node at an analyzer device separate [FEATURE ID: 4]

from the command and control node and a victim computing device [FEATURE ID: 1]

to determine whether it contains a command stored therein , the command being exchanged between an attacker computing device and the victim computing device . 2 . The computerized method of claim [FEATURE ID: 15]

1 , further comprising : processing the malware artifact file to identify a second universal resource locator identifying a second malware artifact file at a second command and control node ; retrieving the second malware artifact file stored at the second command and control node ; and storing the second malware artifact file in the electronic data store . 3 . The computerized method of claim 1 , further comprising : predicting a second universal resource locator identifying a second malware artifact file at a second command and control node , the prediction based on the first universal resource locator [FEATURE ID: 16]

, and wherein the second universal resource locator identifies a host [FEATURE ID: 1]

and a file name [FEATURE ID: 12]

, and generating the second universal resource locator based on the prediction . 4 . The computerized method of claim 1 , wherein : the malware artifact file comprises data [FEATURE ID: 18]

having been exfiltrated from a victim computing device by an attacker , and wherein the malware artifact file is a file [FEATURE ID: 8]

uploaded by a victim computing device to the command and control server , and wherein the command is a command to transfer a file . 5 . The computerized method of claim 1 , wherein the malware artifact file comprises one or more commands [FEATURE ID: 7]

provided by an attacker computing device . 6 . The computerized method of claim 5 , wherein the one or more commands provided by an attacker computing device [FEATURE ID: 4]

comprise at least one instruction [FEATURE ID: 14]

directed to a victim computing device . 7 . The computerized method of claim 1 , wherein the malware artifact file further comprises program code [FEATURE ID: 3]

for controlling execution [FEATURE ID: 3]

of malware [FEATURE ID: 7]

on a victim computing device . 8 . The computerized method of claim 7 , wherein the program code comprises an instruction to a victim computing device [FEATURE ID: 1]

to upload one or more files to the command and control node . 9 . The computerized method of claim 1 , wherein the command and control node is in communication with a victim computing device and an attacker computing device . 10 . The computerized method of claim 1 , wherein the command and control node is a publicly accessible network resource [FEATURE ID: 5]

and accessing the command and control node does not legally constitute access without authorization by a third - party [FEATURE ID: 5]

. 11 . The computerized method of claim 1 , further comprising : analyzing the command to determine a command type [FEATURE ID: 19]
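The Cyber Engineering Services claim above walks through a pipeline: retrieve the malware artifact file from the command and control node, determine whether it is at least partially obfuscated, decode it to reverse the obfuscating transformation, store it, and analyze it for commands. The sketch below illustrates that flow under stated assumptions: the printable-byte heuristic, single-byte XOR as the obfuscating transformation, and the `CMD ` line prefix are all hypothetical stand-ins, not anything the patent specifies:

```python
def looks_obfuscated(data: bytes) -> bool:
    """Crude heuristic: treat the artifact as obfuscated if mostly non-printable."""
    printable = sum(32 <= b < 127 or b in (9, 10, 13) for b in data)
    return printable / max(len(data), 1) < 0.6

def deobfuscate_xor(data: bytes, key: int) -> bytes:
    """Reverse a single-byte XOR transformation (one common obfuscation)."""
    return bytes(b ^ key for b in data)

def process_artifact(data: bytes, store: dict, name: str, xor_key: int) -> list[str]:
    """Decode if needed, store in the electronic data store, extract commands."""
    if looks_obfuscated(data):
        data = deobfuscate_xor(data, xor_key)
    store[name] = data  # the claim's "electronic data store"
    text = data.decode("latin-1")
    return [ln for ln in text.splitlines() if ln.startswith("CMD ")]
```

Real artifacts would of course require format-specific decoders; the point is only the claimed order of operations (detect, decode, store, analyze).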








Targeted Patent:

Patent: US8850517B2
Filed: 2013-01-15
Issued: 2014-09-30
Patent Holder: (Original Assignee) TAASERA Inc     (Current Assignee) TAASERA Inc
Inventor(s): Srinivas Kumar

Title: Runtime risk detection based on user, application, and system action sequence correlation

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: EP2501099A1
Filed: 2011-03-17
Issued: 2012-09-19
Patent Holder: (Original Assignee) Skunk Worx BV     (Current Assignee) Skunk Worx BV
Inventor(s): Mark Willem Loman, Erik Jan Loman, Victor Marinus Johann Simon Van Hillo

Title: Method and system for detecting malicious web content

[FEATURE ID: 1] method, rules database, task | system, computer, node, step, processor, procedure, scheduler | [FEATURE ID: 1] method, device
[TRANSITIVE ID: 2] assessing, storing, identifying | detecting, defining, establishing, monitoring, verifying, checking, providing | [TRANSITIVE ID: 2] determining, receiving, indicating
[FEATURE ID: 3] runtime risk, rules, assessment policies | conditions, policies, instructions, criteria, patterns, threats, settings | [FEATURE ID: 3] malware, antivirus packages
[FEATURE ID: 4] application program | enterprise, application, infrastructure, network, apparatus, appliance, entity | [FEATURE ID: 4] environment, antivirus service, external network
[FEATURE ID: 5] device, runtime monitor, processing device, user, computing system | client, network, display, machine, controller, browser, program | [FEATURE ID: 5] second device, second network, web server
[TRANSITIVE ID: 6] comprising, using | with, including, having, providing, for, of, to | [TRANSITIVE ID: 6] comprising
[FEATURE ID: 7] action sequence | action, association, application, alert, identifier, order, assessment | [FEATURE ID: 7] indication
[FEATURE ID: 8] policy database | firewall, network, server, computer | [FEATURE ID: 8] local network
[TRANSITIVE ID: 9] identified | received, calculated, designated, indicated, determined | [TRANSITIVE ID: 9] such
[FEATURE ID: 10] sequence | stream, subset, plurality, signature, portion, value, set | [FEATURE ID: 10] representation, hash
[FEATURE ID: 11] claim | item, aspect, embodiment, example, clair, requirement, step | [FEATURE ID: 11] claim
[FEATURE ID: 12] explicit input | message, query, response, expression, data, information, communication | [FEATURE ID: 12] test, request
[FEATURE ID: 13] behalf | one, each, part, the | [FEATURE ID: 13] N bits
[FEATURE ID: 14] consequence | result, component, portion, source | [FEATURE ID: 14] first part
1 . A method [FEATURE ID: 1]

for assessing [TRANSITIVE ID: 2]

runtime risk [FEATURE ID: 3]

for an application program [FEATURE ID: 4]

that executes on a device [FEATURE ID: 5]

, comprising [TRANSITIVE ID: 6]

: storing [TRANSITIVE ID: 2]

, in a rules database [FEATURE ID: 1]

, a plurality of rules [FEATURE ID: 3]

, wherein each rule identifies an action sequence [FEATURE ID: 7]

; storing , in a policy database [FEATURE ID: 8]

, a plurality of assessment policies [FEATURE ID: 3]

, wherein each assessment policy includes at least one rule of the plurality of rules ; identifying [TRANSITIVE ID: 2]

, using [TRANSITIVE ID: 6]

at least one assessment policy , a runtime risk for an application program that executes on a device , wherein the identified [TRANSITIVE ID: 9]

runtime risk indicates a risk or threat of the identified action sequence of the application ; and identifying , by a runtime monitor [FEATURE ID: 5]

including a processing device [FEATURE ID: 5]

, a behavior score for the application program that executes on the device based on the identified runtime risk , wherein the action sequence is a sequence [FEATURE ID: 10]

of at least two performed actions , and each performed action is at least one of : a user action , an application action , and a system action . 2 . The method of claim [FEATURE ID: 11]

1 , wherein the user action is any form of explicit input [FEATURE ID: 12]

from a user [FEATURE ID: 5]

of a computing system [FEATURE ID: 5]

. 3 . The method of claim 1 , wherein the application action is any activity performed by the application initiated programmatically by a task [FEATURE ID: 1]

in execution of a computing system . 4 . The method of claim 1 , wherein the system action is any operation performed by a computing system on behalf [FEATURE ID: 13]

of , or as a consequence [FEATURE ID: 14]

1 In an environment [FEATURE ID: 4]

comprising [TRANSITIVE ID: 6]

at least a first device adapted to be communicatively connected to a routing device over a first network and a second device [FEATURE ID: 5]

adapted to be communicatively connected to the routing device over a second network [FEATURE ID: 5]

, the routing device adapted to be communicatively connected to an antivirus service [FEATURE ID: 4]

, a method [FEATURE ID: 1]

for determining [TRANSITIVE ID: 2]

whether web content intended for transmission between the first device and the second device via the routing device may comprise malware [FEATURE ID: 3]

, the method comprising : receiving [TRANSITIVE ID: 2]

, at the routing device , at least a part of web content from the second device ; providing , by the routing device , to the antivirus service , at least a representation [FEATURE ID: 10]

of N bits [FEATURE ID: 13]

of the received part of the web content ; and receiving , at the routing device , from the antivirus service , test information indicating [TRANSITIVE ID: 2]

whether the web content may comprise malware , wherein the test information is based on the representation of the N bits provided by the routing device . 2 The method according to claim [FEATURE ID: 11]

1 , wherein : when the test information indicates that the web content does not comprise malware , the method further comprises the routing device transmitting the web content to the first device , and when the test information indicates that the web content may comprise malware , the method further comprises blocking transmission of the web content to the first device . 3 The method according to one or more of preceding claims , wherein the representation of the N bits comprises a representation or the first N bits of the received part of the web content and / or wherein the representation or the N bits comprises a hash [FEATURE ID: 10]

of the N bits . 4 The method according to one or more of the preceding claims , further comprising buffering the N bits of the received part of the web content at the routing device . 5 The method according to one or more of preceding claims , wherein the representation of the N bits is provided to the antivirus service and / or the test [FEATURE ID: 12]

is received from the antivirus service using User Datagram Protocol . 6 The method according to one or more of the preceding claims , wherein the representation of the N bits is provided to the antivirus service and / or the test information is received from the antivirus service encrypted , authenticated , or both encrypted and authenticated . 7 The method according to one or more of the preceding claims , wherein the routing device is configured to support HTTP - pipelining and connection pre-allocation . 8 The method according to one or more of the preceding claims , the method further comprising , prior to receiving the at least a part of the web content from the second device : receiving , at the routing device , from the first device , a request [FEATURE ID: 12]

for access to the web content provided by the second device , and re-directing , by the routing device , the request to the second device , wherein the routing device receives the at least a part of the web content from the second device in response to the routing device re-directing the request to the second device . 9 The method according to claim 8 , further comprising providing , by the routing device , to the antivirus service , a first part [FEATURE ID: 14]

of the request , such [FEATURE ID: 9]

as e.g. a hostname and / or a Uniform Resource Identifier associated with the web content , wherein the test information is further based on the first part of the request provided by the routing device . 10 The method according to claim 9 , wherein the test information is established by receiving the web content at the antivirus service and checking the web content against one or more antivirus packages [FEATURE ID: 3]

. 11 The method according to one or more of claims 8 - 10 , further comprising storing , at the routing device , at least a part of the request and at least a part of the test information associated with the request . 12 The method according to one or more of the preceding claims , wherein the first network comprises a local network [FEATURE ID: 8]

, the second network comprises an external network [FEATURE ID: 4]

, the second device comprises a web server [FEATURE ID: 5]

, and the first device comprises a device [FEATURE ID: 1]

within the local network capable of receiving the web content from the web server and wherein , optionally , when the test information indicates that the web content may comprise malware , the method further comprises providing an indication [FEATURE ID: 7]
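The excerpt above describes a routing device that forwards a representation (for example a hash) of the first N bits of received web content to an antivirus service, then forwards or blocks the content based on the returned test information. A minimal illustrative sketch of that flow, with all class, function, and parameter names hypothetical:

```python
import hashlib

def representation(content: bytes, n_bits: int = 4096) -> str:
    """Hash of the first N bits (here N bits = n_bits // 8 bytes) of the content."""
    return hashlib.sha256(content[: n_bits // 8]).hexdigest()

class RoutingDevice:
    """Toy model: query an antivirus service with the digest, then forward
    or block the web content based on the test information it returns."""

    def __init__(self, antivirus_service):
        # antivirus_service: callable digest -> bool (True = may comprise malware)
        self.antivirus = antivirus_service

    def handle(self, content: bytes) -> str:
        digest = representation(content)
        may_be_malware = self.antivirus(digest)
        return "blocked" if may_be_malware else "forwarded"

# Usage: a stub antivirus service that flags one known-bad digest.
bad_digest = representation(b"EICAR-like payload")
device = RoutingDevice(lambda d: d == bad_digest)
print(device.handle(b"EICAR-like payload"))  # blocked
print(device.handle(b"benign page"))         # forwarded
```

The claim leaves the transport (e.g. UDP, per dependent claim 5) and the exact representation open; the hash here is only one of the representations the claim language admits.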








Targeted Patent:

Patent: US8850517B2
Filed: 2013-01-15
Issued: 2014-09-30
Patent Holder: (Original Assignee) TAASERA Inc     (Current Assignee) TAASERA Inc
Inventor(s): Srinivas Kumar

Title: Runtime risk detection based on user, application, and system action sequence correlation

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US8250654B1
Filed: 2005-01-27
Issued: 2012-08-21
Patent Holder: (Original Assignee) Science Applications International Corp SAIC     (Current Assignee) Leidos Inc
Inventor(s): Scott Cruickshanks Kennedy, II Carleton Royse Ayers, Javier Godinez, Susan Fichera Banks, Myoki Elizabeth Spencer

Title: Systems and methods for implementing and scoring computer network defense exercises

[FEATURE ID: 1] method, application program, application, runtime monitor, processing device, claim, user, computing system, activity, task, execution, operation, statesystem, program, network, device, protocol, procedure, processor[FEATURE ID: 1] process, client system defense training exercise, implemented, server architecture, computer, first server, client system CPU
[TRANSITIVE ID: 2] assessing, storing, identifyingestablishing, providing, verifying, defining, receiving, recognizing, monitoring[TRANSITIVE ID: 2] facilitating, determining, registering, tracking
[FEATURE ID: 3] runtime riskvulnerability, threats, information, attacks, policies, security, events[FEATURE ID: 3] vulnerabilities, current vulnerabilities message
[FEATURE ID: 4] devicesystem, network, user, client, display, server, computer system[FEATURE ID: 4] client system, client computer, hard disk
[TRANSITIVE ID: 5] comprising, usingthrough, by, with, includes, containing, comprises, for[TRANSITIVE ID: 5] comprising, including
[FEATURE ID: 6] rules databasecomputer, server, message, repository, memory, device, table[FEATURE ID: 6] client identity, database
[FEATURE ID: 7] policy databasedatabase, network, server, cache, storage, computer, system[FEATURE ID: 7] client, memory
[FEATURE ID: 8] assessment policiesconditions, diagnostics, instructions, metrics, reports[FEATURE ID: 8] information
[TRANSITIVE ID: 9] identifieddesignated, given, particular, assigned[TRANSITIVE ID: 9] associated
[FEATURE ID: 10] sequencecollection, series, subset, plurality[FEATURE ID: 10] list
[FEATURE ID: 11] explicit inputsignal, response, notification, communication, report, request, challenge[FEATURE ID: 11] protocol version identification message, registration request message, system identification message, successful registration, message, messages, list vulnerabilities message
1 . A method [FEATURE ID: 1]

for assessing [TRANSITIVE ID: 2]

runtime risk [FEATURE ID: 3]

for an application program [FEATURE ID: 1]

that executes on a device [FEATURE ID: 4]

, comprising [TRANSITIVE ID: 5]

: storing [TRANSITIVE ID: 2]

, in a rules database [FEATURE ID: 6]

, a plurality of rules , wherein each rule identifies an action sequence ; storing , in a policy database [FEATURE ID: 7]

, a plurality of assessment policies [FEATURE ID: 8]

, wherein each assessment policy includes at least one rule of the plurality of rules ; identifying [TRANSITIVE ID: 2]

, using [TRANSITIVE ID: 5]

at least one assessment policy , a runtime risk for an application program that executes on a device , wherein the identified [TRANSITIVE ID: 9]

runtime risk indicates a risk or threat of the identified action sequence of the application [FEATURE ID: 1]

; and identifying , by a runtime monitor [FEATURE ID: 1]

including a processing device [FEATURE ID: 1]

, a behavior score for the application program that executes on the device based on the identified runtime risk , wherein the action sequence is a sequence [FEATURE ID: 10]

of at least two performed actions , and each performed action is at least one of : a user action , an application action , and a system action . 2 . The method of claim [FEATURE ID: 1]

1 , wherein the user action is any form of explicit input [FEATURE ID: 11]

from a user [FEATURE ID: 1]

of a computing system [FEATURE ID: 1]

. 3 . The method of claim 1 , wherein the application action is any activity [FEATURE ID: 1]

performed by the application initiated programmatically by a task [FEATURE ID: 1]

in execution [FEATURE ID: 1]

of a computing system . 4 . The method of claim 1 , wherein the system action is any operation [FEATURE ID: 1]

performed by a computing system on behalf of , or as a consequence of , a user action or application action that changes the state [FEATURE ID: 1]
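The targeted claim above can be read as a pipeline: rules (each identifying an action sequence) are grouped into assessment policies, matched against an application's performed actions to yield an identified runtime risk, which a runtime monitor converts into a behavior score. A minimal sketch of that flow, with the data structures and scoring weights invented for illustration:

```python
# Hypothetical model of the claimed flow: rules database -> policy database
# -> identified runtime risk -> behavior score. All values are illustrative.
RULES = {  # rules database: rule id -> the action sequence it identifies
    "r1": ("user_click", "app_spawn_process"),
    "r2": ("app_spawn_process", "system_registry_write"),
}
POLICIES = {  # policy database: each assessment policy includes >= 1 rule
    "p_default": ["r1", "r2"],
}

def identify_runtime_risk(actions, policy="p_default"):
    """Return ids of rules whose action sequence appears contiguously in the
    observed sequence of at least two performed actions."""
    hits = []
    for rid in POLICIES[policy]:
        seq = RULES[rid]
        n = len(seq)
        if any(tuple(actions[i:i + n]) == seq for i in range(len(actions) - n + 1)):
            hits.append(rid)
    return hits

def behavior_score(actions):
    """Runtime monitor: a behavior score based on the identified runtime risk."""
    return 10 * len(identify_runtime_risk(actions))

observed = ["user_click", "app_spawn_process", "system_registry_write"]
print(identify_runtime_risk(observed))  # ['r1', 'r2']
print(behavior_score(observed))         # 20
```

Note the mix of action types in the observed sequence (user, application, and system actions), matching the claim's requirement that each performed action be at least one of those three.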

1 . A process [FEATURE ID: 1]

for facilitating [TRANSITIVE ID: 2]

a client system defense training exercise [FEATURE ID: 1]

implemented [TRANSITIVE ID: 1]

over a client [FEATURE ID: 7]

- server architecture [FEATURE ID: 1]

comprising [TRANSITIVE ID: 5]

: sending a protocol version identification message [FEATURE ID: 11]

by a client system [FEATURE ID: 4]

including [TRANSITIVE ID: 5]

at least one computer [FEATURE ID: 1]

to a first server [FEATURE ID: 1]

for determining [TRANSITIVE ID: 2]

the protocol version common to both the client computer [FEATURE ID: 4]

and the first server ; sending a registration request message [FEATURE ID: 11]

by the client system to a first server for registering [TRANSITIVE ID: 2]

the client computer with the first server ; sending a system identification message [FEATURE ID: 11]

by the client system to a first server for tracking [TRANSITIVE ID: 2]

the client identity [FEATURE ID: 6]

; sending a profile message by the first server to the client system in response to successful registration [FEATURE ID: 11]

by the client system , the profile message including a list [FEATURE ID: 10]

of vulnerabilities [FEATURE ID: 3]

with associated [TRANSITIVE ID: 9]

vulnerability identifiers ( IDs ) that the client is to monitor ; sending a health message by the client system to the first server at predetermined intervals , the health message including information [FEATURE ID: 8]

regarding at least one of client system CPU [FEATURE ID: 1]

, memory [FEATURE ID: 7]

, hard disk [FEATURE ID: 4]

, network and interfaces ; sending a vulnerability fixed message [FEATURE ID: 11]

by the client system to the first server each time one of the vulnerabilities on the list of vulnerabilities has been fixed , the vulnerability fixed messages [FEATURE ID: 11]

including the associated vulnerability ID for each fixed vulnerability ; sending a list vulnerabilities message [FEATURE ID: 11]

by the first server to the client system , requesting a listing of all current client system vulnerabilities by associated vulnerability ID ; sending a list of current vulnerabilities message [FEATURE ID: 3]

by the client system to the first server in response to the list vulnerabilities message from the first server ; storing details from the profile message , one or more health messages , one or more vulnerability fixed messages and one or more list of current vulnerabilities messages in at least one database [FEATURE ID: 6]
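The charted claim above enumerates a client/server message exchange: registration, a profile message carrying a list of vulnerability IDs to monitor, vulnerability-fixed messages, a current-vulnerabilities listing, and storage of the messages in at least one database. A toy model of the server-side bookkeeping, with all message names and fields hypothetical:

```python
# Hypothetical sketch of the exercise server's message bookkeeping.
class ExerciseServer:
    def __init__(self, vulnerabilities):
        self.profile = list(vulnerabilities)  # vulnerability IDs the client must monitor
        self.db = []                          # the "at least one database" of stored messages

    def register(self, client_id):
        """On successful registration, answer with the profile message."""
        self.db.append(("registration", client_id))
        return {"type": "profile", "vulnerabilities": self.profile}

    def vulnerability_fixed(self, client_id, vuln_id):
        """Store a vulnerability-fixed message with its associated vulnerability ID."""
        self.db.append(("vulnerability_fixed", client_id, vuln_id))

    def list_current_vulnerabilities(self, client_id, fixed):
        """Answer a list-vulnerabilities request with the still-open IDs."""
        current = [v for v in self.profile if v not in fixed]
        self.db.append(("current_vulnerabilities", client_id, current))
        return current

server = ExerciseServer(["CVE-A", "CVE-B", "CVE-C"])
profile = server.register("client-1")
server.vulnerability_fixed("client-1", "CVE-B")
print(server.list_current_vulnerabilities("client-1", {"CVE-B"}))  # ['CVE-A', 'CVE-C']
```

The protocol-version, system-identification, and periodic health messages from the claim are omitted here for brevity; they would be further entries in the same stored-message database.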








Targeted Patent:

Patent: US8850517B2
Filed: 2013-01-15
Issued: 2014-09-30
Patent Holder: (Original Assignee) TAASERA Inc     (Current Assignee) TAASERA Inc
Inventor(s): Srinivas Kumar

Title: Runtime risk detection based on user, application, and system action sequence correlation

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US8219582B2
Filed: 2008-04-25
Issued: 2012-07-10
Patent Holder: (Original Assignee) International Business Machines Corp     (Current Assignee) Kyndryl Inc
Inventor(s): Danny Yen-Fu Chen, David A. Cox, Sheryl S. Kinstler, Fabian F. Morgan

Title: System, method, and computer readable media for identifying a user-initiated log file record in a log file

[FEATURE ID: 1] method, policy database, application, runtime monitor, processing device, computing system, task, operationprogram, device, system, server, processor, machine, network[FEATURE ID: 1] method, user, log file, computer, software program, first software command, second software command, software application, computer system
[TRANSITIVE ID: 2] assessing, storing, identifying, usingdetermining, providing, defining, receiving, recording, including, displaying[TRANSITIVE ID: 2] identifying, selecting, having, executing, indicating, computer stores
[FEATURE ID: 3] runtime risk, rulesinstructions, policies, commands, functionality, controls, routines, operations[FEATURE ID: 3] program instructions
[FEATURE ID: 4] application program, rule, application action, form, activityevent, operation, application, execution, item, request, action[FEATURE ID: 4] log file record
[FEATURE ID: 5] devicedisplay, client, controller, processor, network, workstation, terminal[FEATURE ID: 5] display device
[TRANSITIVE ID: 6] comprisingincluding, includes, containing, involving, by, having, providing[TRANSITIVE ID: 6] comprising
[FEATURE ID: 7] rules databasedatabase, controller, memory, device, buffer, register, computer[FEATURE ID: 7] memory device
[FEATURE ID: 8] action sequence, consequenceinteraction, behavior, function, activity, task, response, process[FEATURE ID: 8] software command
[FEATURE ID: 9] assessment policy, sequencerule, set, string, series, list, plurality, profile[FEATURE ID: 9] pattern
[TRANSITIVE ID: 10] identifieddefined, executed, stored, detected, created, triggered, made[TRANSITIVE ID: 10] initiated, generated
[FEATURE ID: 11] actioncommand, event, function, instruction, operation, item[FEATURE ID: 11] action button, task
[FEATURE ID: 12] claimembodiment, aspect, item, paragraph, claim of, clause, in claim[FEATURE ID: 12] claim
[FEATURE ID: 13] explicit input, behalf, stateprocessing, performance, output, activation, initiation, one, selection[FEATURE ID: 13] execution, generation
[FEATURE ID: 14] user, second actionone, use, customer, third, first, second, processor[FEATURE ID: 14] first user
[FEATURE ID: 15] executionoperation, software, memory, the, hardware, time[FEATURE ID: 15] order
[FEATURE ID: 16] first timesecond, times, predetermined time, current time, first, specified time, subsequent time[FEATURE ID: 16] first time, second time, third time
1 . A method [FEATURE ID: 1]

for assessing [TRANSITIVE ID: 2]

runtime risk [FEATURE ID: 3]

for an application program [FEATURE ID: 4]

that executes on a device [FEATURE ID: 5]

, comprising [TRANSITIVE ID: 6]

: storing [TRANSITIVE ID: 2]

, in a rules database [FEATURE ID: 7]

, a plurality of rules [FEATURE ID: 3]

, wherein each rule [FEATURE ID: 4]

identifies an action sequence [FEATURE ID: 8]

; storing , in a policy database [FEATURE ID: 1]

, a plurality of assessment policies , wherein each assessment policy [FEATURE ID: 9]

includes at least one rule of the plurality of rules ; identifying [TRANSITIVE ID: 2]

, using [TRANSITIVE ID: 2]

at least one assessment policy , a runtime risk for an application program that executes on a device , wherein the identified [TRANSITIVE ID: 10]

runtime risk indicates a risk or threat of the identified action sequence of the application [FEATURE ID: 1]

; and identifying , by a runtime monitor [FEATURE ID: 1]

including a processing device [FEATURE ID: 1]

, a behavior score for the application program that executes on the device based on the identified runtime risk , wherein the action sequence is a sequence [FEATURE ID: 9]

of at least two performed actions , and each performed action [FEATURE ID: 11]

is at least one of : a user action , an application action [FEATURE ID: 4]

, and a system action . 2 . The method of claim [FEATURE ID: 12]

1 , wherein the user action is any form [FEATURE ID: 4]

of explicit input [FEATURE ID: 13]

from a user [FEATURE ID: 14]

of a computing system [FEATURE ID: 1]

. 3 . The method of claim 1 , wherein the application action is any activity [FEATURE ID: 4]

performed by the application initiated programmatically by a task [FEATURE ID: 1]

in execution [FEATURE ID: 15]

of a computing system . 4 . The method of claim 1 , wherein the system action is any operation [FEATURE ID: 1]

performed by a computing system on behalf [FEATURE ID: 13]

of , or as a consequence [FEATURE ID: 8]

of , a user action or application action that changes the state [FEATURE ID: 13]

of the computing system . 5 . The method of claim 1 , wherein a first action of the at least two performed actions is performed at a first time [FEATURE ID: 16]

, and the second action [FEATURE ID: 14]

1 . A method [FEATURE ID: 1]

for identifying [TRANSITIVE ID: 2]

a user [FEATURE ID: 1]

- initiated [TRANSITIVE ID: 10]

log file record [FEATURE ID: 4]

in a log file [FEATURE ID: 1]

, the method comprising [TRANSITIVE ID: 6]

: a computer [FEATURE ID: 1]

selecting [TRANSITIVE ID: 2]

the log file having [TRANSITIVE ID: 2]

a plurality of log file records therein , the plurality of log file records having at least in part a repeating pattern [FEATURE ID: 9]

of log file records automatically generated [TRANSITIVE ID: 10]

by a software program [FEATURE ID: 1]

; the computer executing [TRANSITIVE ID: 2]

a first software command [FEATURE ID: 1]

to store a first timestamp value in a memory device [FEATURE ID: 7]

, the first timestamp value indicating [TRANSITIVE ID: 2]

a first time [FEATURE ID: 16]

; the computer executing at least a first user [FEATURE ID: 14]

- initiated software command [FEATURE ID: 8]

by a user that induces the computer to store at least a user - initiated log file record having a second timestamp value in the log file and to store the second timestamp value , the second timestamp value indicating a second time [FEATURE ID: 16]

after the first time ; wherein the computer stores [FEATURE ID: 2]

the at least one user - initiated log file record responsive to the user selecting a perform action button [FEATURE ID: 11]

, the perform action button being configured to cause execution [FEATURE ID: 13]

of the first user - initiated software command and to generate the at least one user - initiated log file record for storage in the log file ; the computer executing a second software command [FEATURE ID: 1]

to store a third timestamp value in the memory device , the third timestamp value indicating a third time [FEATURE ID: 16]

, the third time being after the second time and after execution of the first user - initiated software command is completed ; the computer analyzing the plurality of log file records in the log file to identify the repeating pattern of log file records stored therein , the repeating pattern of log file records being generated by a software application [FEATURE ID: 1]

at predetermined time intervals ; the computer distinguishing between the user - initiated log file record generated by the user selecting the perform action button and the repeating pattern of log file records generated by the software application at the predetermined time intervals ; the computer displaying the user - initiated log file record in a graphical user interface that is devoid of the repeating pattern of log file records generated by the software application at the predetermined time intervals ; and the computer storing the user - initiated log file record in the memory device ; wherein distinguishing between the user - initiated log file record generated by the user selecting the perform action button and the repeating pattern of log file records generated by the software application at the predetermined time intervals comprises the computer removing the repeating pattern of log file records from the plurality of log file records in order [FEATURE ID: 15]

to identify the user - initiated log file record ; and wherein the user selecting the perform action button causes generation [FEATURE ID: 13]

of the at least one user - initiated log file record to correspond to at least one of performing migrate task [FEATURE ID: 11]

, performing an assign task , and performing a submit task . 2 . The method of claim [FEATURE ID: 12]

1 , further comprising the computer displaying the plurality of log file records in a graphical user interface on a display device [FEATURE ID: 5]

. 3 . A computer system [FEATURE ID: 1]

for identifying a user - initiated log file record in a log file , the computer system comprising : one or more processors , one or more computer - readable tangible storage devices , and one or more computer - readable memories , at least one of the one or more computer - readable memories having the log file stored therein , the log file having a plurality of log file records therein , the plurality of log file records having at least in part a repeating pattern of log file records automatically generated by a software program ; program instructions [FEATURE ID: 3]
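The charted claim above isolates a user-initiated log file record by bracketing the user action between two stored timestamps and removing the repeating pattern of records the software application generates at predetermined intervals. A minimal sketch of that filtering step, with the log representation hypothetical:

```python
# Hypothetical sketch: bracket a user action between a first and third
# timestamp, then strip the software's repeating records to isolate the
# user-initiated log file record.
def user_initiated_records(log, t_first, t_third, repeating_messages):
    """log: list of (timestamp, message) pairs. Returns records written
    strictly between the first and third timestamps that are not part of
    the repeating pattern."""
    return [
        (ts, msg) for ts, msg in log
        if t_first < ts < t_third and msg not in repeating_messages
    ]

log = [
    (1, "heartbeat"),      # automatically generated, repeating pattern
    (2, "migrate task"),   # user-initiated via the perform-action button
    (3, "heartbeat"),
]
print(user_initiated_records(log, t_first=0, t_third=4,
                             repeating_messages={"heartbeat"}))
# [(2, 'migrate task')]
```

In the claim, the repeating pattern is identified by analyzing the log itself (records recurring at predetermined intervals); here it is passed in directly to keep the sketch short.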








Targeted Patent:

Patent: US8850517B2
Filed: 2013-01-15
Issued: 2014-09-30
Patent Holder: (Original Assignee) TAASERA Inc     (Current Assignee) TAASERA Inc
Inventor(s): Srinivas Kumar

Title: Runtime risk detection based on user, application, and system action sequence correlation

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US8214364B2
Filed: 2008-05-21
Issued: 2012-07-03
Patent Holder: (Original Assignee) International Business Machines Corp     (Current Assignee) Daedalus Blue LLC
Inventor(s): Joseph P. Bigus, Leon Gong, Christoph Lingenfelder

Title: Modeling user access to computer resources

[FEATURE ID: 1] method, application, runtime monitor, processing device, user, computing system, task, executiondevice, system, process, program, logic, platform, network[FEATURE ID: 1] computer, method
[TRANSITIVE ID: 2] assessing, storing, identifying, usingdefining, determining, monitoring, processing, establishing, analyzing, creating[TRANSITIVE ID: 2] collecting, documenting, accessing, aggregating, generating, running
[FEATURE ID: 3] runtime risk, explicit inputinformation, data, user, operations, activity, behavior, access[FEATURE ID: 3] user access, user actions, user behavior, actions
[FEATURE ID: 4] application programactivity, instance, account, operator, agent, entity[FEATURE ID: 4] user
[FEATURE ID: 5] deviceresource, devices, computing, hardware, system[FEATURE ID: 5] computer resources
[TRANSITIVE ID: 6] comprisingincluding, performing, includes, containing, comprises, involving, by[TRANSITIVE ID: 6] comprising
[FEATURE ID: 7] rules database, rule, behavior scoredatabase, pattern, template, profile, classifier, policy, classification[FEATURE ID: 7] model, cluster, classification model, scoring rule
[FEATURE ID: 8] action sequenceassociation, algorithm, identifier, anomaly[FEATURE ID: 8] association rule model
[FEATURE ID: 9] policy databaseblacklist, template, classifier, database[FEATURE ID: 9] clustering model
[FEATURE ID: 10] leastlest, minus, east, last, lease, most, least any[FEATURE ID: 10] least
[TRANSITIVE ID: 11] identifieddefined, particular, associated, designated, predetermined, given, different[TRANSITIVE ID: 11] selected, distinct, respective
[FEATURE ID: 12] sequenceseries, plurality, stream, subset, collection, list, sets[FEATURE ID: 12] first set, set
[FEATURE ID: 13] actionsactivities, transactions, elements, events, items[FEATURE ID: 13] attributes
[FEATURE ID: 14] activity, operation, second actionexecution, actions, application, processing, one, behavior, task[FEATURE ID: 14] operation
[FEATURE ID: 15] stateproperties, content, parameters, characteristics[FEATURE ID: 15] data
1 . A method [FEATURE ID: 1]

for assessing [TRANSITIVE ID: 2]

runtime risk [FEATURE ID: 3]

for an application program [FEATURE ID: 4]

that executes on a device [FEATURE ID: 5]

, comprising [TRANSITIVE ID: 6]

: storing [TRANSITIVE ID: 2]

, in a rules database [FEATURE ID: 7]

, a plurality of rules , wherein each rule [FEATURE ID: 7]

identifies an action sequence [FEATURE ID: 8]

; storing , in a policy database [FEATURE ID: 9]

, a plurality of assessment policies , wherein each assessment policy includes at least [FEATURE ID: 10]

one rule of the plurality of rules ; identifying [TRANSITIVE ID: 2]

, using [TRANSITIVE ID: 2]

at least one assessment policy , a runtime risk for an application program that executes on a device , wherein the identified [TRANSITIVE ID: 11]

runtime risk indicates a risk or threat of the identified action sequence of the application [FEATURE ID: 1]

; and identifying , by a runtime monitor [FEATURE ID: 1]

including a processing device [FEATURE ID: 1]

, a behavior score [FEATURE ID: 7]

for the application program that executes on the device based on the identified runtime risk , wherein the action sequence is a sequence [FEATURE ID: 12]

of at least two performed actions [FEATURE ID: 13]

, and each performed action is at least one of : a user action , an application action , and a system action . 2 . The method of claim 1 , wherein the user action is any form of explicit input [FEATURE ID: 3]

from a user [FEATURE ID: 1]

of a computing system [FEATURE ID: 1]

. 3 . The method of claim 1 , wherein the application action is any activity [FEATURE ID: 14]

performed by the application initiated programmatically by a task [FEATURE ID: 1]

in execution [FEATURE ID: 1]

of a computing system . 4 . The method of claim 1 , wherein the system action is any operation [FEATURE ID: 14]

performed by a computing system on behalf of , or as a consequence of , a user action or application action that changes the state [FEATURE ID: 15]

of the computing system . 5 . The method of claim 1 , wherein a first action of the at least two performed actions is performed at a first time , and the second action [FEATURE ID: 14]

1 . A computer [FEATURE ID: 1]

- implemented method [FEATURE ID: 1]

to model user access [FEATURE ID: 3]

to computer resources [FEATURE ID: 5]

, the method comprising [TRANSITIVE ID: 6]

: collecting [TRANSITIVE ID: 2]

a first set [FEATURE ID: 12]

of log records documenting [TRANSITIVE ID: 2]

user actions [FEATURE ID: 3]

in accessing [TRANSITIVE ID: 2]

the computer resources during a first time interval ; aggregating [TRANSITIVE ID: 2]

the first set of log records at one or more chronological levels ; generating [TRANSITIVE ID: 2]

, by operation [FEATURE ID: 14]

of one or more computer processors , a model [FEATURE ID: 7]

of user behavior [FEATURE ID: 3]

by running [TRANSITIVE ID: 2]

one or more selected [TRANSITIVE ID: 11]

model types using data [FEATURE ID: 15]

associated with one or more attributes [FEATURE ID: 13]

selected from the first set of log records , wherein the data is aggregated into one or more mining tables according to the one or more chronological levels , and further using at least one algorithm parameter selected for the one or more model types , wherein the generated model includes a plurality of clusters of the selected one or more model types , wherein each cluster [FEATURE ID: 7]

is associated with a distinct [FEATURE ID: 11]

, respective [FEATURE ID: 11]

authorized user role that is authorized to access the computer resources , wherein each cluster characterizes a distinct , legitimate pattern with which any user [FEATURE ID: 4]

of the respective authorized user role is expected to access the computer resources , wherein the generated model comprises at least [FEATURE ID: 10]

one of a classification model [FEATURE ID: 7]

, a clustering model [FEATURE ID: 9]

, and an association rule model [FEATURE ID: 8]

, wherein the clustering model comprises at least one of a distribution - based clustering model and a center - based clustering model ; and scoring , based on the generated model and at least one scoring rule [FEATURE ID: 7]

, a set [FEATURE ID: 12]

of user actions to determine whether the set of actions [FEATURE ID: 3]
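The charted claim above builds per-role clusters from aggregated log records, each cluster characterizing a legitimate access pattern, then scores a set of user actions against the model. A minimal sketch of the scoring step for a center-based clustering model (one of the model types the claim names), with the roles, attributes, and threshold invented:

```python
# Hypothetical sketch: score observed user actions against per-role cluster
# centers over aggregated attributes (a center-based clustering model).
CLUSTERS = {  # authorized user role -> cluster center
    "admin":   {"logins_per_day": 2.0, "files_touched": 50.0},
    "analyst": {"logins_per_day": 8.0, "files_touched": 5.0},
}

def distance(a, b):
    """Euclidean distance between two attribute vectors."""
    return sum((a[k] - b[k]) ** 2 for k in a) ** 0.5

def score(observed, role, threshold=10.0):
    """Scoring rule: flag the set of actions when it falls far from the
    legitimate pattern expected for the user's authorized role."""
    return "anomalous" if distance(observed, CLUSTERS[role]) > threshold else "expected"

print(score({"logins_per_day": 2.5, "files_touched": 48.0}, "admin"))    # expected
print(score({"logins_per_day": 40.0, "files_touched": 400.0}, "admin"))  # anomalous
```

The claim's chronological aggregation (mining tables at one or more chronological levels) would feed the attribute vectors used here; that preprocessing is elided.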








Targeted Patent:

Patent: US8850517B2
Filed: 2013-01-15
Issued: 2014-09-30
Patent Holder: (Original Assignee) TAASERA Inc     (Current Assignee) TAASERA Inc
Inventor(s): Srinivas Kumar

Title: Runtime risk detection based on user, application, and system action sequence correlation

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US8201255B1
Filed: 2009-06-30
Issued: 2012-06-12
Patent Holder: (Original Assignee) Symantec Corp     (Current Assignee) CA Inc
Inventor(s): Carey Nachenberg

Title: Hygiene-based discovery of exploited portals

[FEATURE ID: 1] method, device, policy database, application, processing device, user, computing system, task, statesystem, processor, network, server, program, controller, machine[FEATURE ID: 1] computer, method, client, security policy, registry key
[TRANSITIVE ID: 2] assessing, storing, identifying, usingdefining, generating, monitoring, providing, evaluating, analyzing, establishing[TRANSITIVE ID: 2] determination, receiving, regarding, determining, reviewing, computing, correlation, notification
[FEATURE ID: 3] runtime risk, executes, actionsreputation, data, trustworthiness, performance, behavior, functionality, security[FEATURE ID: 3] hygiene, legitimacy, activities, hygiene scores, malware infection rate, poor reputation, private user information
[FEATURE ID: 4] application program, action sequence, application action, form, explicit input, activity, operation, second actionevent, execution, agent, action, actions, behavior, item[FEATURE ID: 4] application, activity, illegitimate activity, multiple clients, executable file
[TRANSITIVE ID: 5] comprisingincludes, comprises, having, including, and, containing, involves[TRANSITIVE ID: 5] comprising, has
[FEATURE ID: 6] rules databaserepository, buffer, memory, system, cache, processor, database[FEATURE ID: 6] process
[FEATURE ID: 7] risk, threatlikelihood, vulnerability, misuse, suspicion, cause, hazard, probability[FEATURE ID: 7] propensities
[FEATURE ID: 8] runtime monitorcomputer, computing, hardware, client[FEATURE ID: 8] clients
[FEATURE ID: 9] behavior scoremetric, profile, reputation, risk, probability, value, signature[FEATURE ID: 9] score distribution, reputation score
[FEATURE ID: 10] sequencecollection, subset, grouping, plurality[FEATURE ID: 10] group
[FEATURE ID: 11] actionmovement, function, operation, task, step, behavior[FEATURE ID: 11] same activity
[FEATURE ID: 12] claimfigure, step, embodiment, item, paragraph, clam, feature[FEATURE ID: 12] claim
[FEATURE ID: 13] behalfinitiation, performance, completion, part, detection[FEATURE ID: 13] future performance
1 . A method [FEATURE ID: 1]

for assessing [TRANSITIVE ID: 2]

runtime risk [FEATURE ID: 3]

for an application program [FEATURE ID: 4]

that executes [TRANSITIVE ID: 3]

on a device [FEATURE ID: 1]

, comprising [TRANSITIVE ID: 5]

: storing [TRANSITIVE ID: 2]

, in a rules database [FEATURE ID: 6]

, a plurality of rules , wherein each rule identifies an action sequence [FEATURE ID: 4]

; storing , in a policy database [FEATURE ID: 1]

, a plurality of assessment policies , wherein each assessment policy includes at least one rule of the plurality of rules ; identifying [TRANSITIVE ID: 2]

, using [TRANSITIVE ID: 2]

at least one assessment policy , a runtime risk for an application program that executes on a device , wherein the identified runtime risk indicates a risk [FEATURE ID: 7]

or threat [FEATURE ID: 7]

of the identified action sequence of the application [FEATURE ID: 1]

; and identifying , by a runtime monitor [FEATURE ID: 8]

including a processing device [FEATURE ID: 1]

, a behavior score [FEATURE ID: 9]

for the application program that executes on the device based on the identified runtime risk , wherein the action sequence is a sequence [FEATURE ID: 10]

of at least two performed actions [FEATURE ID: 3]

, and each performed action [FEATURE ID: 11]

is at least one of : a user action , an application action [FEATURE ID: 4]

, and a system action . 2 . The method of claim [FEATURE ID: 12]

1 , wherein the user action is any form [FEATURE ID: 4]

of explicit input [FEATURE ID: 4]

from a user [FEATURE ID: 1]

of a computing system [FEATURE ID: 1]

. 3 . The method of claim 1 , wherein the application action is any activity [FEATURE ID: 4]

performed by the application initiated programmatically by a task [FEATURE ID: 1]

in execution of a computing system . 4 . The method of claim 1 , wherein the system action is any operation [FEATURE ID: 4]

performed by a computing system on behalf [FEATURE ID: 13]

of , or as a consequence of , a user action or application action that changes the state [FEATURE ID: 1]

of the computing system . 5 . The method of claim 1 , wherein a first action of the at least two performed actions is performed at a first time , and the second action [FEATURE ID: 4]

1 . A computer [FEATURE ID: 1]

- implemented method [FEATURE ID: 1]

of hygiene [FEATURE ID: 3]

- based determination [FEATURE ID: 2]

of legitimacy [FEATURE ID: 3]

of activities [FEATURE ID: 3]

performed by applications on clients [FEATURE ID: 8]

, the method comprising [TRANSITIVE ID: 5]

: receiving [TRANSITIVE ID: 2]

, from a client [FEATURE ID: 1]

, information regarding [TRANSITIVE ID: 2]

an application [FEATURE ID: 4]

that is performing an activity [FEATURE ID: 4]

on the client ; determining [TRANSITIVE ID: 2]

a score distribution [FEATURE ID: 9]

for hygiene scores [FEATURE ID: 3]

of other clients on which a same type of application has [TRANSITIVE ID: 5]

performed a same activity [FEATURE ID: 11]

, the determining comprising : reviewing [TRANSITIVE ID: 2]

the hygiene scores of the other clients on which the same type of application has performed the same activity , the hygiene scores representing the other clients ' propensities [FEATURE ID: 7]

for being infected by malware ; and determining , using the reviewed hygiene scores , a malware infection rate [FEATURE ID: 3]

of the other clients relative to a malware infection rate of all clients , wherein the malware infection rate of the other clients being greater than the malware infection rate of all clients indicates that the activity performed by the application is likely malicious ; correlating the activity being performed by the application on the client with the score distribution for hygiene scores of the other clients on which the same type of application has performed the same activity ; computing [FEATURE ID: 2]

, based on the correlation [FEATURE ID: 2]

, a reputation score [FEATURE ID: 9]

for the activity with respect to the application performing the activity ; and identifying , based on the reputation score , whether the activity is an illegitimate activity [FEATURE ID: 4]

for the application . 2 . The method of claim [FEATURE ID: 12]

1 , wherein identifying further comprises identifying the activity to be an illegitimate activity for the application based on the reputation score indicating a poor reputation [FEATURE ID: 3]

. 3 . The method of claim 2 , further comprising implementing a security policy [FEATURE ID: 1]

for multiple clients [FEATURE ID: 4]

, the security policy requiring blocking of applications of the same type from future performance [FEATURE ID: 13]

of the illegitimate activity on those clients . 4 . The method of claim 2 , further comprising notifying the client that the activity has been identified as an illegitimate activity for the client , the notification [FEATURE ID: 2]

indicating to the client to block the application from performing the illegitimate activity and to block future performance of the illegitimate activity by the application . 5 . The method of claim 1 , wherein the activity is selected from a group [FEATURE ID: 10]

consisting of : introducing an executable file [FEATURE ID: 4]

on the client , injecting code into a process [FEATURE ID: 6]

on the client , modifying a registry key [FEATURE ID: 1]

on the client , and accessing private user information [FEATURE ID: 3]
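The hygiene-based legitimacy determination quoted above (score distribution, relative infection rate, reputation score) could be sketched as follows. This is a minimal illustration, not the patent's implementation: the convention that a higher hygiene score means a greater propensity for infection, the `0.5` threshold, and the function names are all assumptions.

```python
# Hypothetical sketch of the claimed hygiene-based legitimacy check.
# Assumption: hygiene scores represent propensity for malware infection
# (higher = more likely infected); the 0.5 threshold is illustrative.

def infection_rate(hygiene_scores, threshold=0.5):
    """Fraction of clients whose hygiene score marks them as likely infected."""
    return sum(1 for s in hygiene_scores if s >= threshold) / len(hygiene_scores)

def reputation_score(scores_same_activity, scores_all_clients):
    """Reputation of an activity with respect to the application performing it:
    negative when clients where the same type of application performed the
    same activity are more infected than the overall client population."""
    return infection_rate(scores_all_clients) - infection_rate(scores_same_activity)

def is_illegitimate(scores_same_activity, scores_all_clients):
    # Per the claim: a higher infection rate among the "other clients"
    # relative to all clients indicates the activity is likely malicious.
    return reputation_score(scores_same_activity, scores_all_clients) < 0
```

For example, if clients performing the activity are uniformly high-risk while the population is mixed, the reputation score goes negative and the activity is flagged.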








Targeted Patent:

Patent: US8850517B2
Filed: 2013-01-15
Issued: 2014-09-30
Patent Holder: (Original Assignee) TAASERA Inc     (Current Assignee) TAASERA Inc
Inventor(s): Srinivas Kumar

Title: Runtime risk detection based on user, application, and system action sequence correlation

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US8191147B1
Filed: 2008-04-24
Issued: 2012-05-29
Patent Holder: (Original Assignee) Symantec Corp     (Current Assignee) CA Inc
Inventor(s): Patrick Gardner, Shane Pereira

Title: Method for malware removal based on network signatures and file system artifacts

[FEATURE ID: 1] method, application, user, operationcomputer, device, program, processor, procedure, client, machine[FEATURE ID: 1] method, host computer system, network, malicious code, process, malicious detection, system, computer system, memory
[TRANSITIVE ID: 2] assessing, storingidentifying, monitoring, analyzing, establishing, verifying, examining, processing[TRANSITIVE ID: 2] detecting, determining, determination, mapping, blocking
[FEATURE ID: 3] runtime risk, rules, assessment policiesconditions, policies, properties, parameters, information, actions, files[FEATURE ID: 3] data, residual artifacts
[FEATURE ID: 4] application program, application actionapplication, event, activity, object, action, operation, agent[FEATURE ID: 4] entry, heuristic
[FEATURE ID: 5] device, processing device, computing system, tasknetwork, user, system, display, platform, server, machine[FEATURE ID: 5] computer, malicious network signature present
[TRANSITIVE ID: 6] comprising, usingincluding, with, by, for, providing, of, through[TRANSITIVE ID: 6] comprising
[FEATURE ID: 7] rules database, execution, consequence, statedatabase, process, function, system, memory, profile, resource[FEATURE ID: 7] residual artifact, malicious network signature database, file
[FEATURE ID: 8] rulecondition, request, definition, key[FEATURE ID: 8] requirement
[FEATURE ID: 9] action sequence, formaction, activity, event, element, item, sequence, application[FEATURE ID: 9] examination
[FEATURE ID: 10] policy databasefirewall, network, file, computer[FEATURE ID: 10] network communication
[TRANSITIVE ID: 11] identifyingexamining, determining, monitoring, checking, identification[TRANSITIVE ID: 11] detection
[TRANSITIVE ID: 12] identifieddetermined, executed, generated, detected[TRANSITIVE ID: 12] present
[FEATURE ID: 13] threatfailure, cause, consequence, performance[FEATURE ID: 13] result
[FEATURE ID: 14] runtime monitornetwork, method, module, process[FEATURE ID: 14] registry key
[FEATURE ID: 15] behavior scorepolicy, remedy, warning, reputation, solution[FEATURE ID: 15] notification
[FEATURE ID: 16] actions, explicit inputcommands, transactions, events, access, activity, information, data[FEATURE ID: 16] inbound data packets
[FEATURE ID: 17] action, second actionorder, operation, result, execution, actions, activity, task[FEATURE ID: 17] act
[FEATURE ID: 18] claimclam, step, paragraph, figure, the claim, clause, statement[FEATURE ID: 18] claim
[FEATURE ID: 19] activityrequest, change, manipulation, command[FEATURE ID: 19] modification
1 . A method [FEATURE ID: 1]

for assessing [TRANSITIVE ID: 2]

runtime risk [FEATURE ID: 3]

for an application program [FEATURE ID: 4]

that executes on a device [FEATURE ID: 5]

, comprising [TRANSITIVE ID: 6]

: storing [TRANSITIVE ID: 2]

, in a rules database [FEATURE ID: 7]

, a plurality of rules [FEATURE ID: 3]

, wherein each rule [FEATURE ID: 8]

identifies an action sequence [FEATURE ID: 9]

; storing , in a policy database [FEATURE ID: 10]

, a plurality of assessment policies [FEATURE ID: 3]

, wherein each assessment policy includes at least one rule of the plurality of rules ; identifying [TRANSITIVE ID: 11]

, using [TRANSITIVE ID: 6]

at least one assessment policy , a runtime risk for an application program that executes on a device , wherein the identified [TRANSITIVE ID: 12]

runtime risk indicates a risk or threat [FEATURE ID: 13]

of the identified action sequence of the application [FEATURE ID: 1]

; and identifying , by a runtime monitor [FEATURE ID: 14]

including a processing device [FEATURE ID: 5]

, a behavior score [FEATURE ID: 15]

for the application program that executes on the device based on the identified runtime risk , wherein the action sequence is a sequence of at least two performed actions [FEATURE ID: 16]

, and each performed action [FEATURE ID: 17]

is at least one of : a user action , an application action [FEATURE ID: 4]

, and a system action . 2 . The method of claim [FEATURE ID: 18]

1 , wherein the user action is any form [FEATURE ID: 9]

of explicit input [FEATURE ID: 16]

from a user [FEATURE ID: 1]

of a computing system [FEATURE ID: 5]

. 3 . The method of claim 1 , wherein the application action is any activity [FEATURE ID: 19]

performed by the application initiated programmatically by a task [FEATURE ID: 5]

in execution [FEATURE ID: 7]

of a computing system . 4 . The method of claim 1 , wherein the system action is any operation [FEATURE ID: 1]

performed by a computing system on behalf of , or as a consequence [FEATURE ID: 7]

of , a user action or application action that changes the state [FEATURE ID: 7]

of the computing system . 5 . The method of claim 1 , wherein a first action of the at least two performed actions is performed at a first time , and the second action [FEATURE ID: 17]
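The targeted claim above (rules identifying action sequences, assessment policies grouping rules, a runtime monitor producing a behavior score from the identified runtime risk) could be sketched like this. The rule names, risk weights, in-order subsequence matching, and scoring formula are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of US8850517 claim 1. Assumptions: each rule's action
# sequence matches as an in-order subsequence of observed actions, and the
# behavior score is simply the matched policy's risk weight.

RULES_DB = {  # each rule identifies an action sequence (>= 2 actions)
    "ransom-like": ("file_read", "file_encrypt", "file_delete"),
    "keylogging":  ("user_keypress", "network_send"),
}

POLICY_DB = {  # each assessment policy includes at least one rule
    "default-policy": {"rules": ["ransom-like", "keylogging"], "risk": 0.9},
}

def identify_runtime_risk(observed_actions, policy_name="default-policy"):
    """Return (rule_name, risk) if a rule's action sequence occurs in order
    within the observed user/application/system actions, else None."""
    policy = POLICY_DB[policy_name]
    for rule_name in policy["rules"]:
        seq = RULES_DB[rule_name]
        it = iter(observed_actions)  # fresh iterator per rule
        if all(any(a == step for a in it) for step in seq):
            return rule_name, policy["risk"]
    return None

def behavior_score(observed_actions):
    """Behavior score for the application based on the identified runtime risk."""
    hit = identify_runtime_risk(observed_actions)
    return hit[1] if hit else 0.0
```

Interleaved unrelated actions do not defeat the match, which mirrors the claim's allowance that the performed actions may mix user, application, and system actions.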

1 . A computer [FEATURE ID: 5]

- implemented method [FEATURE ID: 1]

comprising [TRANSITIVE ID: 6]

: detecting [TRANSITIVE ID: 2]

a malicious network signature on a host computer system [FEATURE ID: 1]

, the detection [FEATURE ID: 11]

being accomplished through an examination [FEATURE ID: 9]

of inbound data packets [FEATURE ID: 16]

from a network [FEATURE ID: 1]

coupled to the host computer system , said malicious network signature being associated with a malicious code [FEATURE ID: 1]

; determining [TRANSITIVE ID: 2]

whether or not said malicious network signature is associated with malicious code , the determination [FEATURE ID: 2]

taking place through a process [FEATURE ID: 1]

comprising : identifying , by reviewing the malicious network signature , and data [FEATURE ID: 3]

associated with the malicious network signature , one or more residual artifacts [FEATURE ID: 3]

required to be present [FEATURE ID: 12]

within the host computer system , in order for a malicious detection [FEATURE ID: 1]

to be validated , wherein the system [FEATURE ID: 1]

is configured to determine a residual artifact [FEATURE ID: 7]

comprising a registry entry , the act [FEATURE ID: 17]

of determination being triggered based on whether the requirement [FEATURE ID: 8]

for the registry entry is specified within the malicious network signature ; determining that at least one of the one or more identified residual artifacts are present within the host computer system , thus validating the malicious detection ; and wherein upon a determination that said malicious network signature is validated , locating said malicious code on said host computer system , and removing said malicious code from said host computer system . 2 . The computer - implemented method of claim [FEATURE ID: 18]

1 further comprising : providing a notification [FEATURE ID: 15]

. 3 . The computer - implemented method of claim 1 wherein upon a determination that said malicious network signature is not validated , exiting said computer - implemented method . 4 . The computer - implemented method of claim 1 wherein said detecting a malicious network signature on a host computer system comprises : detecting a network communication [FEATURE ID: 10]

on said host computer system ; and mapping [FEATURE ID: 2]

said network communication to said malicious network signature present [FEATURE ID: 5]

as an entry [FEATURE ID: 4]

in a malicious network signature database [FEATURE ID: 7]

, said entry identifying said malicious code . 5 . The computer - implemented method of claim 1 wherein said locating said malicious code on said host computer system comprises : locating at least one of a file [FEATURE ID: 7]

associated with said malicious code , and a modification [FEATURE ID: 19]

made to said host computer system by said malicious code . 6 . The computer - implemented method of claim 1 wherein said removing said malicious code from said host computer system comprises : removing each of the following , if present in the host computer system : a file associated with said malicious code , a registry key [FEATURE ID: 14]

associated with said malicious code , and a modification made to said host computer system by said malicious code . 7 . The computer - implemented method of claim 1 wherein said validating whether or not said malicious network signature is associated with non-malicious code comprises : performing at least one validating heuristic [FEATURE ID: 4]

to determine whether or not said malicious network signature is associated with said non-malicious code , said performing generating at least one result [FEATURE ID: 13]

; and determining whether or not said malicious network signature is associated with said non-malicious code based on at least said at least one result . 8 . The computer implemented method of claim 4 wherein said entry identifies one or more residual artifacts associated with said malicious code ; and wherein said removing said malicious code further comprises : removing said one or more residual artifacts from said host computer system . 9 . The computer - implemented method of claim 4 further comprising blocking [FEATURE ID: 2]

said network communication . 10 . A computer system [FEATURE ID: 1]

comprising : a memory [FEATURE ID: 1]
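The charted Symantec claim (detect a malicious network signature in inbound packets, then validate the detection by confirming required residual artifacts are present on the host before removal) could be sketched as below. The signature database contents, artifact naming scheme, and host representation are stand-ins invented for illustration.

```python
# Hypothetical sketch of signature detection validated by residual artifacts.
# Assumption: artifacts are named strings ("registry:..." / "file:...") and
# the host's state is a set of such strings.

SIGNATURE_DB = {
    b"EVIL_BEACON": {
        "malware": "TrojanX",
        "required_artifacts": ["registry:HKLM\\Run\\trojanx", "file:C:\\tmp\\tx.dll"],
    },
}

def detect_signature(packet: bytes):
    """Map an inbound data packet to a malicious network signature entry, if any."""
    for sig, entry in SIGNATURE_DB.items():
        if sig in packet:
            return entry
    return None

def validate_detection(entry, host_artifacts):
    """Per the claim: validated only if at least one identified residual
    artifact is actually present within the host computer system."""
    return any(a in host_artifacts for a in entry["required_artifacts"])

def remove_malware(entry, host_artifacts):
    """On validation, remove the residual artifacts associated with the code."""
    return {a for a in host_artifacts if a not in entry["required_artifacts"]}
```

The validation step is what distinguishes this claim from plain signature matching: a signature hit without the on-host artifacts exits without removal.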








Targeted Patent:

Patent: US8850517B2
Filed: 2013-01-15
Issued: 2014-09-30
Patent Holder: (Original Assignee) TAASERA Inc     (Current Assignee) TAASERA Inc
Inventor(s): Srinivas Kumar

Title: Runtime risk detection based on user, application, and system action sequence correlation

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US8146146B1
Filed: 2005-12-23
Issued: 2012-03-27
Patent Holder: (Original Assignee) AT&T Intellectual Property II LP     (Current Assignee) AT&T Intellectual Property II LP ; AT&T Properties LLC ; Lyft Inc
Inventor(s): John L. Coviello, William A. O'Hern, Stephen G. Roderick, Michael R. Singer

Title: Method and apparatus for integrated network security alert information retrieval

[FEATURE ID: 1] method, comprising, policy database, application, runtime monitor, processing device, user, computing system, operationdevice, computer, system, client, firewall, program, policy[FEATURE ID: 1] method, network, comprising, security threat, destination network address, source network address, computing resource, security analyst, router
[TRANSITIVE ID: 2] assessing, storing, identifyingdetermining, defining, receiving, analyzing, monitoring, establishing, recognizing[TRANSITIVE ID: 2] use, detecting, generating
[FEATURE ID: 3] runtime risk, action sequence, assessment policies, threat, activity, execution, stateperformance, risk, action, vulnerability, functionality, operation, event[FEATURE ID: 3] behavior indicative, behavior, information
[FEATURE ID: 4] application program, application action, consequenceagent, application, appliance, entity, interface, object, activity[FEATURE ID: 4] organizational facility, organization, first computing resource
[FEATURE ID: 5] deviceuser, network, service, host, client, hardware[FEATURE ID: 5] media access control address, third party
[FEATURE ID: 6] rules databasedatabase, rule, policy, firewall, profile[FEATURE ID: 6] routing registry configuration
[FEATURE ID: 7] rulecriterion, request, program, key[FEATURE ID: 7] hyperlink
[FEATURE ID: 8] behavior scorecategory, description, location, profile, recommendation, classification, path[FEATURE ID: 8] physical location, course
[FEATURE ID: 9] actionorder, response, command, instruction[FEATURE ID: 9] first indication
[FEATURE ID: 10] claimclam, item, figure, embodiment, aspect, clause, the claim[FEATURE ID: 10] claim
[FEATURE ID: 11] formitem, aspect, pattern, combination, unit, element, kind[FEATURE ID: 11] type
[FEATURE ID: 12] explicit inputrequest, selection, notification, information, message, query, communication[FEATURE ID: 12] alert, search, scanning, first indication indicative, desire
[FEATURE ID: 13] taskperson, code, function, device[FEATURE ID: 13] user
[FEATURE ID: 14] behalfdetection, receipt, conclusion, performance, occurrence[FEATURE ID: 14] completion
1 . A method [FEATURE ID: 1]

for assessing [TRANSITIVE ID: 2]

runtime risk [FEATURE ID: 3]

for an application program [FEATURE ID: 4]

that executes on a device [FEATURE ID: 5]

, comprising [TRANSITIVE ID: 1]

: storing [TRANSITIVE ID: 2]

, in a rules database [FEATURE ID: 6]

, a plurality of rules , wherein each rule [FEATURE ID: 7]

identifies an action sequence [FEATURE ID: 3]

; storing , in a policy database [FEATURE ID: 1]

, a plurality of assessment policies [FEATURE ID: 3]

, wherein each assessment policy includes at least one rule of the plurality of rules ; identifying [TRANSITIVE ID: 2]

, using at least one assessment policy , a runtime risk for an application program that executes on a device , wherein the identified runtime risk indicates a risk or threat [FEATURE ID: 3]

of the identified action sequence of the application [FEATURE ID: 1]

; and identifying , by a runtime monitor [FEATURE ID: 1]

including a processing device [FEATURE ID: 1]

, a behavior score [FEATURE ID: 8]

for the application program that executes on the device based on the identified runtime risk , wherein the action sequence is a sequence of at least two performed actions , and each performed action [FEATURE ID: 9]

is at least one of : a user action , an application action [FEATURE ID: 4]

, and a system action . 2 . The method of claim [FEATURE ID: 10]

1 , wherein the user action is any form [FEATURE ID: 11]

of explicit input [FEATURE ID: 12]

from a user [FEATURE ID: 1]

of a computing system [FEATURE ID: 1]

. 3 . The method of claim 1 , wherein the application action is any activity [FEATURE ID: 3]

performed by the application initiated programmatically by a task [FEATURE ID: 13]

in execution [FEATURE ID: 3]

of a computing system . 4 . The method of claim 1 , wherein the system action is any operation [FEATURE ID: 1]

performed by a computing system on behalf [FEATURE ID: 14]

of , or as a consequence [FEATURE ID: 4]

of , a user action or application action that changes the state [FEATURE ID: 3]

1 . A method [FEATURE ID: 1]

for use [FEATURE ID: 2]

in a network [FEATURE ID: 1]

comprising [TRANSITIVE ID: 1]

: detecting [TRANSITIVE ID: 2]

behavior indicative [FEATURE ID: 3]

of a security threat [FEATURE ID: 1]

associated with a destination network address [FEATURE ID: 1]

, the behavior [FEATURE ID: 3]

also associated with a source network address [FEATURE ID: 1]

, the destination network address associated with a computing resource [FEATURE ID: 1]

; generating [TRANSITIVE ID: 2]

an alert [FEATURE ID: 12]

in response to detecting the behavior , the alert comprising a hyperlink [FEATURE ID: 7]

configured to initiate a search [FEATURE ID: 12]

of a first database and a second database ; initiating the search of the first database to obtain first information , the first information comprising information [FEATURE ID: 3]

relating to an organizational facility [FEATURE ID: 4]

associated with the computing resource , a user [FEATURE ID: 13]

assigned to the computing resource , and a physical location [FEATURE ID: 8]

of the computing resource ; initiating the search of the second database to obtain second information relating to the alert , the second information comprising a course [FEATURE ID: 8]

of action for a security analyst [FEATURE ID: 1]

for each type [FEATURE ID: 11]

of security threat and information related to a routing registry configuration [FEATURE ID: 6]

for the source network address , wherein the search of the second database is initiated upon completion [FEATURE ID: 14]

of the search of the first database ; scanning the computing resource , the scanning [FEATURE ID: 12]

based on the first information , the scanning detecting a media access control address [FEATURE ID: 5]

of the computing resource and detecting information related to routers through which the computing resource is connected to a local network ; and displaying the first information and the second information and the media access control address of the computing resource and the information related to routers through which the computing resource is connected to the local network ; wherein the first database is maintained by an organization [FEATURE ID: 4]

operating the network and the second database is maintained by a third party [FEATURE ID: 5]

. 2 . The method of claim [FEATURE ID: 10]

1 further comprising : receiving a first indication indicative [FEATURE ID: 12]

of a desire [FEATURE ID: 12]

to obtain the first information relating to the alert , wherein initiating the search is performed in response to receiving the first indication [FEATURE ID: 9]

. 3 . The method of claim 1 wherein the first computing resource [FEATURE ID: 4]

is a router [FEATURE ID: 1]
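The charted AT&T claim (an alert whose hyperlink initiates a search of an organization-maintained database for facility/user/location, then a third-party database for the analyst's course of action, the second search starting on completion of the first) could be sketched as follows. Both database schemas and their contents are assumptions for illustration.

```python
# Hypothetical sketch of integrated security-alert information retrieval.
# Assumption: the first database is keyed by the computing resource's
# destination address, the second by threat type.

INTERNAL_DB = {  # maintained by the organization operating the network
    "10.0.0.5": {"facility": "Building A", "user": "jdoe", "location": "Floor 3"},
}
THIRD_PARTY_DB = {  # maintained by a third party
    "port-scan": {"course_of_action": "isolate host, review router ACLs"},
}

def generate_alert(threat_type, dest_addr, src_addr):
    """Alert generated in response to detecting behavior indicative of a threat."""
    return {"threat": threat_type, "dest": dest_addr, "src": src_addr}

def handle_alert(alert):
    # First search: organizational facility, assigned user, physical location.
    first = INTERNAL_DB.get(alert["dest"], {})
    # Second search is initiated upon completion of the first search.
    second = THIRD_PARTY_DB.get(alert["threat"], {})
    return {**first, **second}
```

The sequencing (second search only after the first completes) is carried here by ordinary statement order; the claim additionally scans the resource and displays the combined results, which this sketch omits.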








Targeted Patent:

Patent: US8850517B2
Filed: 2013-01-15
Issued: 2014-09-30
Patent Holder: (Original Assignee) TAASERA Inc     (Current Assignee) TAASERA Inc
Inventor(s): Srinivas Kumar

Title: Runtime risk detection based on user, application, and system action sequence correlation

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US8091065B2
Filed: 2007-09-25
Issued: 2012-01-03
Patent Holder: (Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC
Inventor(s): Talhah Munawar Mir, Anil Kumar Venkata Revuru, Deepak J. Manohar, Vineet Batta

Title: Threat analysis and modeling during a software development lifecycle of a software application

[FEATURE ID: 1] method, policy database, runtime monitor, processing device, user, activity, task, operationsystem, computer, device, program, component, platform, procedure[FEATURE ID: 1] method, computing device, processor, memory, software development lifecycle, software application, software developer, threat model verification workflow
[TRANSITIVE ID: 2] assessing, identifyinganalyzing, defining, detecting, generating, monitoring, receiving, assigning[TRANSITIVE ID: 2] decomposing, identifying, determining
[FEATURE ID: 3] runtime risk, assessment policies, behavior scoreresources, operations, information, actions, policies, parameters, vulnerability[FEATURE ID: 3] instructions executable, external dependencies, data, attributes, risks, countermeasures, rules, functionality, security assessment
[FEATURE ID: 4] application program, action sequence, application actionapplication, algorithm, event, object, agent, executable, activity[FEATURE ID: 4] application task list, administrative client
[FEATURE ID: 5] deviceprogram, component, client, workflow, resource, network, template[FEATURE ID: 5] website, processing server, software design lifecycle
[TRANSITIVE ID: 6] comprising, storing, usingwith, by, through, to, having, providing, comprises[TRANSITIVE ID: 6] comprising, including
[FEATURE ID: 7] rules database, sequencepattern, profile, subset, representation, database, series, plurality[FEATURE ID: 7] collection
[FEATURE ID: 8] rulesguidelines, parameters, settings, protocols, laws, criteria, commands[FEATURE ID: 8] standards, policies
[TRANSITIVE ID: 9] identifiedapplied, recognized, determined, associated, assigned, generated, detected[TRANSITIVE ID: 9] corresponding, known
[FEATURE ID: 10] actions, executiontasks, elements, functions, applications, functionality, processes, modules[FEATURE ID: 10] components, code
[FEATURE ID: 11] claimclam, item, figure, embodiment, aspect, clause, statement[FEATURE ID: 11] claim
[FEATURE ID: 12] formkind, piece, source, level, unit, use[FEATURE ID: 12] type
[FEATURE ID: 13] explicit inputinput, notification, information, instruction, feedback, communication[FEATURE ID: 13] instructions
[FEATURE ID: 14] computing system, statesystem, processing, gui, display, resource, workload, execution[FEATURE ID: 14] testing
1 . A method [FEATURE ID: 1]

for assessing [TRANSITIVE ID: 2]

runtime risk [FEATURE ID: 3]

for an application program [FEATURE ID: 4]

that executes on a device [FEATURE ID: 5]

, comprising [TRANSITIVE ID: 6]

: storing [TRANSITIVE ID: 6]

, in a rules database [FEATURE ID: 7]

, a plurality of rules [FEATURE ID: 8]

, wherein each rule identifies an action sequence [FEATURE ID: 4]

; storing , in a policy database [FEATURE ID: 1]

, a plurality of assessment policies [FEATURE ID: 3]

, wherein each assessment policy includes at least one rule of the plurality of rules ; identifying [TRANSITIVE ID: 2]

, using [TRANSITIVE ID: 6]

at least one assessment policy , a runtime risk for an application program that executes on a device , wherein the identified [TRANSITIVE ID: 9]

runtime risk indicates a risk or threat of the identified action sequence of the application ; and identifying , by a runtime monitor [FEATURE ID: 1]

including a processing device [FEATURE ID: 1]

, a behavior score [FEATURE ID: 3]

for the application program that executes on the device based on the identified runtime risk , wherein the action sequence is a sequence [FEATURE ID: 7]

of at least two performed actions [FEATURE ID: 10]

, and each performed action is at least one of : a user action , an application action [FEATURE ID: 4]

, and a system action . 2 . The method of claim [FEATURE ID: 11]

1 , wherein the user action is any form [FEATURE ID: 12]

of explicit input [FEATURE ID: 13]

from a user [FEATURE ID: 1]

of a computing system [FEATURE ID: 14]

. 3 . The method of claim 1 , wherein the application action is any activity [FEATURE ID: 1]

performed by the application initiated programmatically by a task [FEATURE ID: 1]

in execution [FEATURE ID: 10]

of a computing system . 4 . The method of claim 1 , wherein the system action is any operation [FEATURE ID: 1]

performed by a computing system on behalf of , or as a consequence of , a user action or application action that changes the state [FEATURE ID: 14]

1 . A method [FEATURE ID: 1]

performed by a computing device [FEATURE ID: 1]

comprising [TRANSITIVE ID: 6]

a processor [FEATURE ID: 1]

and a memory [FEATURE ID: 1]

, the memory including [TRANSITIVE ID: 6]

instructions executable [FEATURE ID: 3]

by the processor , the method comprising : within a software development lifecycle [FEATURE ID: 1]

, decomposing [TRANSITIVE ID: 2]

a software application [FEATURE ID: 1]

into elements comprising components [FEATURE ID: 10]

, roles , external dependencies [FEATURE ID: 3]

, and data [FEATURE ID: 3]

; identifying [TRANSITIVE ID: 2]

attributes [FEATURE ID: 3]

corresponding [TRANSITIVE ID: 9]

to each of the elements ; identifying one or more threats to the software application based on the attributes ; identifying attacks to the software application from a common task list based on the one or more threats , the common task list including a collection [FEATURE ID: 7]

of previously known [TRANSITIVE ID: 9]

attacks ; determining [TRANSITIVE ID: 2]

risks [FEATURE ID: 3]

associated with the attacks to the software application ; generating an application task list [FEATURE ID: 4]

that includes countermeasures [FEATURE ID: 3]

to protect the software application from the risks ; developing code [FEATURE ID: 10]

of the software application based on the application task list ; and generating a visualization to enable a software developer [FEATURE ID: 1]

to implement the countermeasures during the software development lifecycle . 2 . The method of claim [FEATURE ID: 11]

1 , wherein the application task list includes rules [FEATURE ID: 3]

to enable the software application to comply with one or more standards [FEATURE ID: 8]

or policies [FEATURE ID: 8]

. 3 . The method of claim 1 , wherein the components comprise at least one of a website [FEATURE ID: 5]

, a processing server [FEATURE ID: 5]

, and an administrative client [FEATURE ID: 4]

. 4 . The method of claim 1 , wherein the external dependencies describe how to access the elements and how each of the elements interacts with other elements . 5 . The method of claim 1 , further comprising analyzing the one or more threats after identifying the one or more threats . 6 . The method of claim 1 wherein the visualization includes instructions [FEATURE ID: 13]

to the software developer to implement the countermeasures . 7 . The method of claim 1 , further comprising testing the software application to determine a functionality [FEATURE ID: 3]

of the software application and to determine a security assessment [FEATURE ID: 3]

of the software application against the attacks . 8 . The method of claim 7 , wherein the testing [FEATURE ID: 14]

includes a threat model verification workflow [FEATURE ID: 1]

to verify the security assessment of the software application against the threat model . 9 . A method performed by a computing device comprising a processor and a memory , the memory including instructions executable by the processor , the method comprising : defining a software application as part of a software design lifecycle [FEATURE ID: 5]

; determining attributes relating to the software application ; identifying rules to enable the software application to comply with one or more standards or policies , the rules identified based on one or more of : the attributes of the software application , a type [FEATURE ID: 12]
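The charted Microsoft claim (decompose the application into elements, derive threats from element attributes, match attacks from a common task list of previously known attacks, and generate an application task list of countermeasures) could be sketched like this. The attribute-to-threat mapping and the attack/countermeasure tables are invented for illustration.

```python
# Hypothetical sketch of threat analysis during a software development
# lifecycle. Assumption: threats follow mechanically from element attributes.

COMMON_TASK_LIST = {  # previously known attacks, keyed by threat
    "spoofing": ["credential-stuffing"],
    "tampering": ["sql-injection"],
}
COUNTERMEASURES = {
    "credential-stuffing": "enforce MFA and rate limiting",
    "sql-injection": "use parameterized queries",
}

def identify_threats(elements):
    """Identify threats to the application based on attributes of its
    decomposed elements (components, roles, dependencies, data)."""
    threats = set()
    for elem in elements:
        if "authenticates-users" in elem["attributes"]:
            threats.add("spoofing")
        if "writes-database" in elem["attributes"]:
            threats.add("tampering")
    return threats

def application_task_list(elements):
    """Countermeasures for every known attack matching the identified threats."""
    attacks = [a for t in identify_threats(elements) for a in COMMON_TASK_LIST[t]]
    return {a: COUNTERMEASURES[a] for a in attacks}
```

The output task list is what the claim feeds into code development and the developer-facing visualization.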








Targeted Patent:

Patent: US8850517B2
Filed: 2013-01-15
Issued: 2014-09-30
Patent Holder: (Original Assignee) TAASERA Inc     (Current Assignee) TAASERA Inc
Inventor(s): Srinivas Kumar

Title: Runtime risk detection based on user, application, and system action sequence correlation

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US20110321175A1
Filed: 2010-06-23
Issued: 2011-12-29
Patent Holder: (Original Assignee) Salesforce com Inc     (Current Assignee) Salesforce com Inc
Inventor(s): Steve Slater

Title: Monitoring and reporting of data access behavior of authorized database users

[FEATURE ID: 1] method, device, policy database, application, runtime monitor, processing device, form, user, computing system, task, execution, operationprogram, process, network, processor, server, node, database[FEATURE ID: 1] computer, method, user, course, system, schedule, daily basis
[TRANSITIVE ID: 2] assessing, storing, identifyingdetermining, analyzing, detecting, managing, recognizing, tracking, measuring[TRANSITIVE ID: 2] monitoring, recording, comparing, monitoring behavior
[FEATURE ID: 3] runtime risk, risk, threat, explicit input, stateperformance, characteristics, information, data, activity, vulnerability, misuse[FEATURE ID: 3] user activity, historical data access behavior, collective historical data access behavior, ongoing data access behavior, unauthorized data access activity
[FEATURE ID: 4] application programapplication, entity, enterprise, organization, account[FEATURE ID: 4] multi-tenant CRM system
[TRANSITIVE ID: 5] comprisingincluding, includes, containing, comprises, involving, by, having[TRANSITIVE ID: 5] comprising
[FEATURE ID: 6] rules database, rule, behavior scorepolicy, profile, database, metric, template, computer, criterion[FEATURE ID: 6] database system, nominal event activity profile, scoring profile
[FEATURE ID: 7] action sequenceaction, item, alert, element, activity[FEATURE ID: 7] event
[FEATURE ID: 8] assessment policiesactions, metrics, patterns, indicators[FEATURE ID: 8] data access events
[TRANSITIVE ID: 9] usingof, via, to, against, from, for, with[TRANSITIVE ID: 9] accessing
[FEATURE ID: 10] sequencelist, collection, subset, plurality, sum[FEATURE ID: 10] set
[FEATURE ID: 11] actionsitems, transactions, effects, changes, times, ones, states[FEATURE ID: 11] events
[FEATURE ID: 12] action, activitybehavior, change, actions, movement, motion, operation, task[FEATURE ID: 12] action
[FEATURE ID: 13] claimclam, embodiment, paragraph, figure, aspect, the claim, feature[FEATURE ID: 13] claim
[FEATURE ID: 14] behalfdetection, the, conditions, each, performance, occurrence[FEATURE ID: 14] occurrences
1 . A method [FEATURE ID: 1]

for assessing [TRANSITIVE ID: 2]

runtime risk [FEATURE ID: 3]

for an application program [FEATURE ID: 4]

that executes on a device [FEATURE ID: 1]

, comprising [TRANSITIVE ID: 5]

: storing [TRANSITIVE ID: 2]

, in a rules database [FEATURE ID: 6]

, a plurality of rules , wherein each rule [FEATURE ID: 6]

identifies an action sequence [FEATURE ID: 7]

; storing , in a policy database [FEATURE ID: 1]

, a plurality of assessment policies [FEATURE ID: 8]

, wherein each assessment policy includes at least one rule of the plurality of rules ; identifying [TRANSITIVE ID: 2]

, using [TRANSITIVE ID: 9]

at least one assessment policy , a runtime risk for an application program that executes on a device , wherein the identified runtime risk indicates a risk [FEATURE ID: 3]

or threat [FEATURE ID: 3]

of the identified action sequence of the application [FEATURE ID: 1]

; and identifying , by a runtime monitor [FEATURE ID: 1]

including a processing device [FEATURE ID: 1]

, a behavior score [FEATURE ID: 6]

for the application program that executes on the device based on the identified runtime risk , wherein the action sequence is a sequence [FEATURE ID: 10]

of at least two performed actions [FEATURE ID: 11]

, and each performed action [FEATURE ID: 12]

is at least one of : a user action , an application action , and a system action . 2 . The method of claim [FEATURE ID: 13]

1 , wherein the user action is any form [FEATURE ID: 1]

of explicit input [FEATURE ID: 3]

from a user [FEATURE ID: 1]

of a computing system [FEATURE ID: 1]

. 3 . The method of claim 1 , wherein the application action is any activity [FEATURE ID: 12]

performed by the application initiated programmatically by a task [FEATURE ID: 1]

in execution [FEATURE ID: 1]

of a computing system . 4 . The method of claim 1 , wherein the system action is any operation [FEATURE ID: 1]

performed by a computing system on behalf [FEATURE ID: 14]

of , or as a consequence of , a user action or application action that changes the state [FEATURE ID: 3]

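The rule/policy structure claimed above (rules identifying action sequences, assessment policies grouping at least one rule, and a runtime monitor deriving a behavior score from the identified runtime risk) can be sketched in code. This is a minimal illustration only: the class names, the string encoding of actions (`user:`, `app:`, `sys:` prefixes), and the weight-summing score are all hypothetical choices not specified by the claims.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical structures; the claims do not prescribe concrete representations.

@dataclass
class Rule:
    """Each rule identifies an action sequence of at least two performed
    actions, each a user action, application action, or system action."""
    name: str
    action_sequence: List[str]   # e.g. ["user:open_attachment", "sys:create_process"]
    risk_weight: float

@dataclass
class AssessmentPolicy:
    """Each assessment policy includes at least one rule."""
    name: str
    rules: List[Rule]

def identify_runtime_risk(observed: List[str], policy: AssessmentPolicy) -> List[Rule]:
    """Return rules whose action sequence occurs, in order, within the
    observed actions -- a naive stand-in for the claimed identification step."""
    matched = []
    for rule in policy.rules:
        it = iter(observed)
        # Standard in-order subsequence test: each step must appear after the last.
        if all(step in it for step in rule.action_sequence):
            matched.append(rule)
    return matched

def behavior_score(matched: List[Rule]) -> float:
    """Aggregate matched-rule risk weights into a single behavior score."""
    return sum(r.risk_weight for r in matched)
```

For example, a policy holding one rule `["user:open_attachment", "app:write_executable", "sys:create_process"]` would match an observed action stream containing those three actions in order (with unrelated actions interleaved), and the behavior score would be that rule's weight.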
1 . A computer [FEATURE ID: 1]

- implemented method [FEATURE ID: 1]

of monitoring [TRANSITIVE ID: 2]

user activity [FEATURE ID: 3]

in a database system [FEATURE ID: 6]

, the method comprising [TRANSITIVE ID: 5]

: recording [TRANSITIVE ID: 2]

data access events [FEATURE ID: 8]

associated with a user [FEATURE ID: 1]

accessing [TRANSITIVE ID: 9]

data maintained by the database system , resulting in recorded events [FEATURE ID: 11]

; comparing [TRANSITIVE ID: 2]

characteristics of the recorded events for a designated period of time to corresponding characteristics of a nominal event activity profile [FEATURE ID: 6]

for the designated period of time ; and initiating a course [FEATURE ID: 1]

of action [FEATURE ID: 12]

when the characteristics of the recorded events diverge from the nominal event activity profile . 2 . The method of claim [FEATURE ID: 13]

1 , wherein : the database system comprises a multi-tenant customer relationship management ( CRM ) system [FEATURE ID: 1]

; and the user is an authenticated user of the multi-tenant CRM system [FEATURE ID: 4]

. 3 . The method of claim 1 , further comprising deriving the nominal event activity profile from historical data access behavior [FEATURE ID: 3]

of the user . 4 . The method of claim 1 , further comprising deriving the nominal event activity profile from collective historical data access behavior [FEATURE ID: 3]

of a peer group of the user . 5 . The method of claim 1 , further comprising dynamically updating the nominal event activity profile in response to ongoing data access behavior [FEATURE ID: 3]

of the user . 6 . The method of claim 1 , further comprising : maintaining a respective score for each of a plurality of monitored data access events , resulting in a set [FEATURE ID: 10]

of scores for the user ; and in response to each recorded event [FEATURE ID: 7]

, adjusting the set of scores to obtain an updated set of scores for the user , wherein comparing characteristics of the recorded events comprises comparing the updated set of scores to a scoring profile [FEATURE ID: 6]

assigned to the user .

7 . The method of claim 1 , further comprising : maintaining a respective score for each of a plurality of monitored data access events , resulting in a set of scores for the user ; and in response to each recorded event , adjusting the set of scores to obtain an updated set of scores for the user , wherein comparing characteristics of the recorded events comprises comparing the updated set of scores to a scoring profile assigned to a peer group of the user .

8 . A computer - implemented method of monitoring data access activity of a user of a system , the method comprising : maintaining a respective score for each of a plurality of monitored data access events , resulting in a set of scores for the user ; monitoring behavior [FEATURE ID: 2]

of the user to detect occurrences [FEATURE ID: 14]

of the monitored data access events ; updating the set of scores in response to detected occurrences of the monitored data access events , resulting in an updated set of scores ; and initiating a course of action when the updated set of scores is indicative of unauthorized data access activity [FEATURE ID: 3]

. 9 . The method of claim 8 , further comprising recording the detected occurrences of the monitored data access events , along with application data linked with the detected occurrences of the monitored data access events .

10 . The method of claim 8 , further comprising comparing the updated set of scores to a nominal event activity profile associated with the user , wherein initiating the course of action is performed when the updated set of scores diverges from the nominal event activity profile by at least a threshold amount .

11 . The method of claim 10 , further comprising deriving the nominal event activity profile from historical data access behavior of the user .

12 . The method of claim 8 , further comprising resetting the updated set of scores in accordance with a predetermined schedule [FEATURE ID: 1]

. 13 . The method of claim 12 , wherein resetting the updated set of scores comprises initializing the respective score for each of the plurality of monitored data access events on a daily basis [FEATURE ID: 1]
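The score-based claims charted above (a respective score per monitored data access event, adjusted on each detected occurrence, compared against a nominal activity profile with a threshold, and reset on a schedule) can be sketched as follows. This is an illustrative sketch only: the class name, the absolute-difference divergence test, and the numeric thresholds are assumptions, not the patent's specified mechanism.

```python
from typing import Dict, Iterable

class DataAccessMonitor:
    """Hypothetical sketch of the claimed score-set monitoring."""

    def __init__(self, monitored_events: Iterable[str],
                 profile: Dict[str, float], threshold: float):
        # A respective score for each monitored data access event.
        self.scores = {e: 0.0 for e in monitored_events}
        self.profile = profile      # nominal event activity profile
        self.threshold = threshold

    def record(self, event: str, weight: float = 1.0) -> None:
        """Adjust the set of scores in response to each recorded event."""
        if event in self.scores:
            self.scores[event] += weight

    def diverges(self) -> bool:
        """Compare the updated scores to the nominal profile; True would
        trigger the claimed course of action (threshold test of claim 10)."""
        return any(abs(self.scores[e] - self.profile.get(e, 0.0)) > self.threshold
                   for e in self.scores)

    def reset(self) -> None:
        """Initialize each score, e.g. on a daily basis (claims 12-13)."""
        for e in self.scores:
            self.scores[e] = 0.0
```

The nominal profile itself would, per claims 3-5, be derived from the user's (or a peer group's) historical access behavior and updated dynamically; here it is simply passed in as a dictionary of expected per-event scores.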