JESS: On the Performance of Logical Retractions


JESS: On the Performance of Logical Retractions

Md Oliya
Hi,

I am doing some experiments with a set of rules that contain the "logical" CE.
I want to measure the performance of Jess on a set of assertions as well as retractions.

After some experiments, I found that the runtime for assertions is much lower than that for retractions.
In fact, the performance of retractions is so poor that I would rather just re-run Jess from scratch on the already-retracted KB.


A sample test case:
KB size: 100K facts; assertions: 50K; retractions: 1K; rules: 100.
Runtimes: initial run 860 ms, assertions 320 ms, retractions 4 s.


Could you please give me some hints about the reason?


Thanks in advance.
--Oli.

Re: JESS: On the Performance of Logical Retractions

Friedman-Hill, Ernest
I don't think there's a particular reason in general. Retracting a fact takes only a little longer than asserting one, on average. But with liberal use of "logical", retracting a single fact could set off a sort of "cascade effect" in which many other facts, and many activations, are removed as well because of their dependencies. All of that would take time. Still, your case seems extreme. Maybe there's something pathological about this particular case.
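
To make the cascade concrete, here is a minimal, purely illustrative Jess sketch; the rule and facts are invented for this note and are not taken from the benchmark:

;; Facts asserted under (logical ...) support are retracted automatically
;; when the facts that logically support them go away.
(defrule infer-grandparent
  (logical (parent ?a ?b)
           (parent ?b ?c))
  =>
  (assert (grandparent ?a ?c)))

(watch facts)
(assert (parent Alice Bob))
(assert (parent Bob Carol))
(run)                                  ; derives (grandparent Alice Carol)

;; Retracting one supporting fact also removes the derived fact:
(retract-string "(parent Alice Bob)")

With many logical rules chained together, a single retraction can remove a long chain of derived facts and activations, which is the cascade described above.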


---------------------------------------------------------
Ernest Friedman-Hill
Informatics & Decision Sciences, Sandia National Laboratories
PO Box 969, MS 9012, Livermore, CA 94550
http://www.jessrules.com

Re: JESS: On the Performance of Logical Retractions

Nessrine Nassou
Hi all, I need some help, please. How can I import the Jess class "Rete" into a Java application?


Thanks for the help.




Re: JESS: On the Performance of Logical Retractions

Jason Morris
I got this one, Ernest :-)

Try

   import jess.*;


--
Cheers,
Jason
------------------------------------------------------
Morris Technical Solutions LLC
[hidden email]
(517) 304-5883

Re: JESS: On the Performance of Logical Retractions

Md Oliya
In reply to this post by Friedman-Hill, Ernest
Thank you Ernest. 

I am experimenting with the Lehigh University Benchmark (LUBM), where I translate the OWL TBox into equivalent Jess rules using the logical construct. Specifically, I am using the dataset and transformations from OpenRuleBench.

As for the runtimes, I missed a point about the retractions: even if the session contains no rules at all (no defrules, just assertions), processing the same set of retractions takes considerable time. This suggests that most of the runtime is incurred by Jess's internal operations.
Still, when the number of changes grows large (say, more than 10% of the KB), the runtime is not acceptable, and rerunning Jess on the already-retracted KB would be faster.

I have another question as well: what kind of truth maintenance is implemented in Jess? Do you rely solely on the Rete memory nodes and tokens for this purpose?


--Oli.
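
For concreteness, the kind of DLP-style translation described above might look roughly like the sketch below; the templates, slots, and axioms are invented stand-ins, not the actual LUBM/OpenRuleBench rules:

;; Hypothetical DLP-style translations of two OWL axioms.
(deftemplate GraduateStudent (slot id))
(deftemplate Student (slot id))
(deftemplate subOrganizationOf (slot sub) (slot super))

;; A subclass axiom: every GraduateStudent is a Student.
(defrule GraduateStudent-is-Student
  (logical (GraduateStudent (id ?x)))
  =>
  (assert (Student (id ?x))))

;; A transitive property axiom.
(defrule subOrganizationOf-transitive
  (logical (subOrganizationOf (sub ?x) (super ?y))
           (subOrganizationOf (sub ?y) (super ?z)))
  =>
  (assert (subOrganizationOf (sub ?x) (super ?z))))

Because every derived fact is logically supported by the facts it was inferred from, retracting one base fact can unwind whole chains of such inferences, which is where the retraction cost shows up.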



Re: JESS: On the Performance of Logical Retractions

Peter Lin
Although it may be obvious to some people, I thought I'd mention this well-known lesson.

Do not load a huge knowledge base into memory. This lesson is well documented in the existing literature on knowledge-based systems. It has also been discussed on the Jess mailing list numerous times over the years, so I would suggest searching the list archives to learn from other people's experience.

It's better to load the knowledge base into memory intelligently, as needed, rather than blindly loading everything. Even when someone has 256 GB of memory, one should ask, "why load all of that into memory up front?"

If the test uses RDF triples, it's well known that triples produce excessive partial matches and often lead to an OutOfMemoryError. The real issue isn't Jess; it's how one tries to solve the problem. I would recommend reading Gary Riley's book on expert systems to avoid repeating a lot of mistakes that others have already documented.
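
As a purely illustrative aside on the partial-match point (the templates, slots, and rules below are invented, not taken from any benchmark): a rule over triples has to join one fact per attribute, while a richer template carries the same information in a single fact.

;; Triple encoding: one fact per attribute, so the rule joins three facts
;; per individual and the join nodes accumulate many partial matches.
(deftemplate triple (slot s) (slot p) (slot o))

(defrule adult-in-boston-triples
  (triple (s ?x) (p isa)     (o Person))
  (triple (s ?x) (p age)     (o ?a&:(>= ?a 18)))
  (triple (s ?x) (p livesIn) (o Boston))
  =>
  (assert (triple (s ?x) (p isa) (o Adult))))

;; Object encoding: the same test is a single pattern, with no joins at all.
(deftemplate person (slot name) (slot age) (slot city))

(defrule adult-in-boston-objects
  (person (name ?x) (age ?a&:(>= ?a 18)) (city Boston))
  =>
  (assert (adult ?x)))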



Re: JESS: On the Performance of Logical Retractions

Md Oliya
Thank you very much, Peter, for the useful information. I will definitely look into that.
But in the context of this thread, I am not loading a huge (a subjective term, perhaps) knowledge base: it is 100K assertions, and the whole run takes around 400 MB of memory.
Secondly, in my experiments I subtracted the time spent loading the assertions/retractions into Jess, since I am focusing on the performance of the Rete network itself.
Lastly, I am not doing an RDF-based mapping; rather, I follow the Description Logic Programs (DLP) approach and translate each OWL class/property into its own corresponding template.


--Oli.



Re: JESS: On the Performance of Logical Retractions

Friedman-Hill, Ernest
In reply to this post by Md Oliya
I think I need to see the actual test program, or otherwise we need to get on the same page somehow. As a counterexample, here's a little program with no rules that asserts about 10,000 facts one at a time and then retracts them. It takes 1.9 seconds (including JVM startup) on my MacBook. If I comment out the "retract" part, it takes 1.6 seconds. These runs would be faster if the facts weren't being parsed out of strings this way, twice, but regardless, this doesn't bear out the idea that retractions are pathologically slow.

(foreach ?a (create$ a b c d e f g h i j k l m n o p q r s t u v w x y z)
  (foreach ?b (create$ a b c d e f g h i j k l m n o p q r s t u v w x y z)
    (foreach ?c (create$ a b c d e f g h i j k l m n o p q r s t u v w x y z)
      (bind ?x (str-cat ?a ?b ?c))
      (assert-string (str-cat "(" ?x ")")))))

(foreach ?a (create$ a b c d e f g h i j k l m n o p q r s t u v w x y z)
  (foreach ?b (create$ a b c d e f g h i j k l m n o p q r s t u v w x y z)
    (foreach ?c (create$ a b c d e f g h i j k l m n o p q r s t u v w x y z)
      (bind ?x (str-cat ?a ?b ?c))
      (retract-string (str-cat "(" ?x ")")))))
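
For what it's worth, the two phases can also be timed separately from inside Jess with something along these lines. This is only a sketch: it assumes Jess's (time) function returns the current time in seconds, and it uses two-letter facts simply to keep it short.

;; Sketch only: time the assert and retract phases separately.
(bind ?start (time))
(foreach ?a (create$ a b c d e f g h i j k l m n o p q r s t u v w x y z)
  (foreach ?b (create$ a b c d e f g h i j k l m n o p q r s t u v w x y z)
    (assert-string (str-cat "(" ?a ?b ")"))))
(printout t "assert phase: " (- (time) ?start) " seconds" crlf)

(bind ?start (time))
(foreach ?a (create$ a b c d e f g h i j k l m n o p q r s t u v w x y z)
  (foreach ?b (create$ a b c d e f g h i j k l m n o p q r s t u v w x y z)
    (retract-string (str-cat "(" ?a ?b ")"))))
(printout t "retract phase: " (- (time) ?start) " seconds" crlf)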





---------------------------------------------------------
Ernest Friedman-Hill
Informatics & Decision Sciences, Sandia National Laboratories
PO Box 969, MS 9012, Livermore, CA 94550
http://www.jessrules.com

Re: JESS: On the Performance of Logical Retractions

Peter Lin
In reply to this post by Md Oliya
By "performance of Rete," what exactly are you referring to?

There are many aspects of Rete that one must study carefully. It's good that you're translating OWL into rules, but the larger question is why use OWL/RDF in the first place? Unless the knowledge fits easily into axioms like "the sky is blue" or the typical RDF examples, there's no benefit to storing or using RDF. That's my own biased perspective on RDF/OWL.

The real question isn't "should I use Rete?" or "how does Rete perform?" The real question is "how do I solve the problem efficiently?"

I've built compliance engines for trading systems using Jess, and I can say from first-hand experience that how you use the engine is the biggest factor. I've done things like loading 500K records to check compliance across a portfolio set with minimal latency in nightly batch processes. The key, though, is taking the time to study the existing literature and understand things before jumping to a solution.

Providing concrete examples of what you're doing will likely get you better advice than making general statements.



Re: JESS: On the Performance of Logical Retractions

Md Oliya
@Peter: I wasn't interested in plugging into Rete in the first place, nor did I have "should I use Rete, or how does Rete perform" in mind. Rather, I was trying to find a solution to the problem at hand, and the more I developed my own solution, the more it came to resemble Rete. So I decided not to reinvent the wheel and to tap into existing implementations instead. By "performance of Rete" I mean the cost of building and maintaining the network, not the data storage and retrieval costs.

@Ernest: I understand your point, and I think the main problem is the cascading effect incurred by liberal use of the logical keyword, as you mentioned.

As I said before, I am using OpenRuleBench, which is a set of test cases for a number of rule engines such as XSB, Jess, and Jena. It is completely self-contained, and you can set it up and test Jess within 15 minutes.

But I still have a question: what kind of truth maintenance is implemented in Jess? Do you rely solely on the Rete memory nodes and tokens for this purpose?



Re: JESS: On the Performance of Logical Retractions

Peter Lin
I've looked at OpenRuleBench in the past, and I just took another quick look.

The way the test was done is "the wrong way" to use a production rule engine. That's my biased opinion. I understand the intent was to measure performance with the same data and similar rules. The point I'm trying to make is that encoding knowledge as triples is pointless for practical applications. Many researchers have extended triples to quads, and others convert complex object models to triples and back. If knowledge naturally fits in a complex object, why decompose it into triples or quads?

To draw an absurd analogy: would you dismantle your car every night to store it away, and then reassemble it every morning?

Think of it this way: say we want to use Lego bricks to capture knowledge. If the subject happens to work well with a 1x3 brick, then all you need is 1x3 bricks. If the subject is complex, a 1x3 brick alone probably isn't going to work. In the real world there are many kinds of bricks, and the things we want to capture usually require a wide variety of them.

If you need to assert a bunch of facts and then retract 50% of them, the first question should be "why am I doing that, and is it a pointless exercise?" The next question I would ask is, "can I use a backward-chaining or query approach instead?"


Re: JESS: On the Performance of Logical Retractions

Friedman-Hill, Ernest
Yeah, I just had a look too, and I think the report on their site says it all. Jess and Drools are at the bottom of their performance results for a reason: they're being misapplied. If your problem looks like the kinds of problems they're benchmarking, then by all means use one of the tools that scored well on their tests. Use the proper tool for the job at hand.


On Jun 10, 2011, at 8:33 AM, Peter Lin wrote:

> I've looked at OpenRuleBench in the past and I just looked at it again
> real quick.
>
> The way the test was done is "the wrong way" to use a production rule
> engine. That's my bias opinion. I understand the intent was to measure
> the performance with the same data, and similar rules. The point I'm
> trying to make is that encoding knowledge as triples is pointless and
> useless for practical applications. Many researchers have extended
> triples to quads and others convert complex object models to triples
> back-and-forth. If knowledge naturally fits in a complex object, why
> decompose it to triples or quads?
>
> To draw an absurd anology. Would you dismantle your car every night to
> store it away and then re-assemble it every morning?
>
> Think of it this way, say we want to use Lego bricks to capture
> knowledge. If the subject happens to work well with a 1x3 brick, then
> all you need is 1x3 bricks. If the subject is complex, just 1x3 brick
> probably isn't going to work. In the real world, there's a lot more
> than 1x3 brick and the things we want to capture usually requires a
> wide variety of bricks.
>
> If you need to assert a bunch of facts and then retract 50% of those
> facts, the first question should be "why am I doing that? and is that
> a pointless exercise?" The first question I would ask is, "can I use
> backward chaining or query approach instead?"
>
>
> On Fri, Jun 10, 2011 at 12:58 AM, Md Oliya <[hidden email]> wrote:
>> @Peter: I werent interested to plug into Rete at first place, neither
>> had "should I use RETE or how does RETE perform" in mind. Rather,  
>> I was
>> trying to find a solution for my problem at hand, and the more and  
>> more i
>> developed my own solution, i found it to be more and more similar  
>> to the
>> Rete. So I intended not to reinvent the wheel, and tap into the  
>> existing
>> implementations. By "performance of RETE" i mean the cost of  
>> building and
>> maintaining the network and not the data storage and retrieval costs.
>> @Ernest: I understand your point and i think the main problem would  
>> be the
>> cascading effect incurred by liberal use of the logical keyword, as  
>> you
>> mentioned.
>> As said before, I am using the Open Rule Bench, which is a set of  
>> test cases
>> for a number of rule engines such as XSB, Jess, and Jena (etc.). It  
>> is
>> perfectly self contained and you can set it up and test the Jess  
>> within 15
>> minutes.
>> But still I have a question:what type of truth maintenance method is
>> implemented in jess? Do you solely rely on the Rete memory nodes  
>> and tokens
>> for this purpose?
>>
>> On Fri, Jun 10, 2011 at 1:21 AM, Peter Lin <[hidden email]> wrote:
>>>
>>> By "performance of RETE" what are you referring to?
>>>
>>> There are many aspects of RETE, which one must study carefully. It's
>>> good that you're translating RDF to OWL, but the larger question is
>>> why use OWL/RDF in the first place? Unless the knowledge easily fits
>>> into axioms like "sky is blue" or typical RDF examples, there's no
>>> benefit to storing or using RDF. My own bias perspective on RDF/OWL.
>>>
>>> The real question isn't "should I use RETE or how does RETE  
>>> perform".
>>> The real question is "how do I solve the problem efficiently?"
>>>
>>> I've built compliance engines for trading systems using JESS. I can
>>> say from first hand experience, it's how you use the engine that has
>>> the biggest factor. I've done things like load 500K records to check
>>> compliance across a portfolio set with minimal latency for nightly
>>> batch processes. the key though is taking time to study existing
>>> literature and understanding things before jumping to a solution.
>>>
>>> providing concrete examples of what your doing will likely get  
>>> better
>>> advice than making general statements.
>>>
>>>
>>> On Thu, Jun 9, 2011 at 12:17 PM, Md Oliya <[hidden email]>  
>>> wrote:
>>>> Thank you very much Peter for the useful information. I will  
>>>> definitely
>>>> look
>>>> into that.
>>>> but in the context of this message, i am not loading a huge  
>>>> (subjective
>>>> interpretation?) knowledge base. It's 100k assertions, with the
>>>> operations
>>>> taking around 400 MB.
>>>> Secondly, in my experiments, I subtracted the loading time of the
>>>> assertions/retractions in jess, as I'm focusing on the  
>>>> performance of
>>>> the
>>>> Rete.
>>>> Lastly, I am not doing an RDF based mapping; rather, I follow the  
>>>> method
>>>> of
>>>> Description Logic Programs for translating each Class/Property of  
>>>> OWL
>>>> into
>>>> its corresponding template.
>>>>
>>>>
>>>> --Oli.
>>>>
>>>>
>>>> On Fri, Jun 10, 2011 at 12:03 AM, Peter Lin <[hidden email]>  
>>>> wrote:
>>>>>
>>>>> Although it "may" be obvious to some people, I thought I'd mention
>>>>> this well known lesson.
>>>>>
>>>>> Do not load huge knowledge base into memory. This lesson is well
>>>>> documented in existing literature on knowledge base systems.  
>>>>> it's also
>>>>> been discussed on JESS mailing list numerous times over the  
>>>>> years, so
>>>>> I would suggest searching JESS mailing list to learn from other
>>>>> people's experience.
>>>>>
>>>>> It's better to intelligently load knowledge base into memory as
>>>>> needed, rather than blindly load everything. Even in the case  
>>>>> where
>>>>> someone has 256Gb of memory, one should ask "why load all that  
>>>>> into
>>>>> memory up front".
>>>>>
>>>>> If the test is using RDF triples, it's well known that RDF triples
>>>>> produces excessive partial matches and often results in
>>>>> OutOfMemoryException. The real issue isn't JESS, it's how one  
>>>>> tries to
>>>>> solve a problem. I would recommend reading Gary Riley's book on  
>>>>> expert
>>>>> systems to avoid repeating a lot of mistakes that others have  
>>>>> already
>>>>> documented.
>>>>>
>>>>>
>>>>> On Thu, Jun 9, 2011 at 11:41 AM, Md Oliya <[hidden email]>  
>>>>> wrote:
>>>>>> Thank you Ernest.
>>>>>> I am experimenting with the Lehigh university benchmark, where i
>>>>>> transfer
>>>>>> OWL TBox into their equivalent rules in Jess, with the logical
>>>>>> construct.
>>>>>> Specifically, I am using the dataset and transformations, as  
>>>>>> used in
>>>>>> the
>>>>>> OpenRuleBench.
>>>>>> As for the runtimes, I missed a point about the retractions.  
>>>>>> The fact
>>>>>> is,
>>>>>> even if the session does not contain any rules (no defrules, just
>>>>>> assertions), loading the same set of retractions takes a  
>>>>>> considerable
>>>>>> time.
>>>>>> This indicates that the high runtime is mostly incurred by jess
>>>>>> internal
>>>>>> operations.
>>>>>> but still, when the number of changes grows high (say more than  
>>>>>> 10%)
>>>>>> the
>>>>>> runtime is not acceptable, and rerunning with the retracted kb  
>>>>>> would
>>>>>> be
>>>>>> faster.
>>>>>> I have another question as well: what type of truth maintenance
>>>>>> method
>>>>>> is
>>>>>> implemented in jess? Do you solely rely on the Rete memory  
>>>>>> nodes and
>>>>>> tokens
>>>>>> for this purpose?
>>>>>>
>>>>>> --Oli.
>>>>>>
>>>>>>
>>>>>> On Mon, Jun 6, 2011 at 7:37 PM, Ernest Friedman-Hill
>>>>>> <[hidden email]>
>>>>>> wrote:
>>>>>>>
>>>>>>> I don't think there's a particular reason in general.  
>>>>>>> Retracting a
>>>>>>> fact
>>>>>>> takes only a little longer than asserting one, on average. But  
>>>>>>> if we
>>>>>>> assume
>>>>>>> liberal use of "logical", retracting a single fact could  
>>>>>>> result in a
>>>>>>> sort of
>>>>>>> "cascade effect" whereby retracting a single fact would result  
>>>>>>> in
>>>>>>> many
>>>>>>> other
>>>>>>> facts, and many activations, being removed also due to  
>>>>>>> dependencies.
>>>>>>> All of
>>>>>>> that would take time. Still, your case seems extreme. Maybe
>>>>>>> there's
>>>>>>> something pathological about this particular case.
>>>>>>>
>>>>>>>
>>>>>>> On Jun 5, 2011, at 3:18 PM, Md Oliya wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I am doing some experiments with a set of rules which contain  
>>>>>>>> the
>>>>>>>> "logical" CE.
>>>>>>>> I intend to see the performance of Jess on a set of  
>>>>>>>> assertions as
>>>>>>>> well
>>>>>>>> as
>>>>>>>> retractions.
>>>>>>>>
>>>>>>>> After some experiments, I found that the runtime for  
>>>>>>>> assertions is
>>>>>>>> much
>>>>>>>> less than that of retractions.
>>>>>>>> In fact, the performance on retractions is so bad that I would
>>>>>>>> rather
>>>>>>>> re
>>>>>>>> (run) jess on a retracted kb.
>>>>>>>>
>>>>>>>>
>>>>>>>> A sample test case:
>>>>>>>> The KB size, number of assertions, number of retractions, and
>>>>>>>> number
>>>>>>>> of
>>>>>>>> rules are 100K, 50K, 1k, and 100, respectively.
>>>>>>>> runtimes are >> initial run: 860ms, assertions: 320ms --
>>>>>>>> retractions:
>>>>>>>> 4s.
>>>>>>>>
>>>>>>>>
>>>>>>>> Would you please give some hints on the reason?
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks in advance.
>>>>>>>> --Oli.
>>>>>>>
>>>>>>> ---------------------------------------------------------
>>>>>>> Ernest Friedman-Hill
>>>>>>> Informatics & Decision Sciences, Sandia National Laboratories
>>>>>>> PO Box 969, MS 9012, Livermore, CA 94550
>>>>>>> http://www.jessrules.com
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>
>
>
>

---------------------------------------------------------
Ernest Friedman-Hill
Informatics & Decision Sciences, Sandia National Laboratories
PO Box 969, MS 9012, Livermore, CA 94550
http://www.jessrules.com








--------------------------------------------------------------------
To unsubscribe, send the words 'unsubscribe jess-users [hidden email]'
in the BODY of a message to [hidden email], NOT to the list
(use your own address!) List problems? Notify [hidden email].
--------------------------------------------------------------------

Reply | Threaded
Open this post in threaded view
|

Re: JESS: On the Performance of Logical Retractions

Md Oliya
In reply to this post by Peter Lin
I understand your point, Peter. But I think that dismantling knowledge into simple, standard subparts is one purpose of OWL, in order to achieve interoperability.

Anyhow, I am using Jess to take advantage of Rete's support for incrementality, not the traditional query-answering functionality. I know that for the latter task, Prolog-style backward chaining is far more suitable.
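
To be concrete, the rules I generate look roughly like this; the template and rule names below are just illustrative, not the exact OpenRuleBench/LUBM translation:

(deftemplate GraduateStudent (slot id))
(deftemplate Student (slot id))

;; DLP-style translation of "every GraduateStudent is a Student".
;; Wrapping the antecedent in (logical ...) makes the derived Student fact
;; depend on the GraduateStudent fact, so retracting the latter
;; automatically retracts the former (Jess's truth maintenance).
(defrule graduate-student-is-a-student
  (logical (GraduateStudent (id ?x)))
  =>
  (assert (Student (id ?x))))

;; e.g.
;; (bind ?f (assert (GraduateStudent (id s1))))
;; (run)          ; derives (Student (id s1))
;; (retract ?f)   ; the derived Student fact is withdrawn as well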

As for the 50% or so of changes, I think that's around the threshold beyond which one might reconsider using Jess, or a forward-chaining style altogether. It would be better to use backward chaining after such a considerable amount of changes.

But still I have a question: what type of truth maintenance is supported in Jess? Can you provide links to more information, please?

On Jun 10, 2011, at 20:33, "Peter Lin" <[hidden email]> wrote:

I've looked at OpenRuleBench in the past and I just looked at it again
real quick.

The way the test was done is "the wrong way" to use a production rule
engine. That's my biased opinion. I understand the intent was to measure
the performance with the same data, and similar rules. The point I'm
trying to make is that encoding knowledge as triples is pointless and
useless for practical applications. Many researchers have extended
triples to quads, and others convert complex object models to triples and
back. If knowledge naturally fits in a complex object, why
decompose it into triples or quads?

To draw an absurd analogy: would you dismantle your car every night to
store it away and then re-assemble it every morning?

Think of it this way: say we want to use Lego bricks to capture
knowledge. If the subject happens to work well with a 1x3 brick, then
all you need is 1x3 bricks. If the subject is complex, a 1x3 brick
alone probably isn't going to work. In the real world, there are a lot
more bricks than the 1x3, and the things we want to capture usually
require a wide variety of bricks.
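
To make the brick analogy concrete in Jess terms, here is a sketch with made-up template and slot names (nothing from a real system):

;; Everything-as-triples: one generic 1x3 brick for all knowledge.
(deftemplate triple (slot subject) (slot predicate) (slot object))

;; Modeling the thing directly: one fact instead of several joined triples.
(deftemplate position
  (slot account) (slot symbol) (slot quantity) (slot price))

A rule over the position template needs a single pattern; saying the same thing with the triple encoding needs three or four joined triple patterns, and those joins are where the excessive partial matches I mentioned before come from.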

If you need to assert a bunch of facts and then retract 50% of those
facts, the first question should be "why am I doing that, and is it
a pointless exercise?" The next question I would ask is, "can I use
a backward-chaining or query approach instead?"
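
If it helps, a Jess query is the sort of thing I mean. This is a rough sketch only; the template, slot, and query names are invented, and you should check the exact run-query* call against the Jess manual:

(deftemplate Student (slot id) (slot dept))   ; illustrative only

;; Ask for the students you need, when you need them, instead of
;; materializing everything up front and retracting half of it later.
(defquery students-in-dept
  (declare (variables ?d))
  (Student (id ?id) (dept ?d)))

;; (bind ?result (run-query* students-in-dept cs))  ; verify syntax for your Jess version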


On Fri, Jun 10, 2011 at 12:58 AM, Md Oliya <[hidden email]> wrote:
@Peter: I wasn't interested in plugging into Rete in the first place, nor
did I have "should I use RETE or how does RETE perform" in mind. Rather, I was
trying to find a solution for my problem at hand, and the more I
developed my own solution, the more I found it to be similar to
Rete. So I decided not to reinvent the wheel and to tap into existing
implementations. By "performance of RETE" I mean the cost of building and
maintaining the network, not the data storage and retrieval costs.
@Ernest: I understand your point, and I think the main problem would be the
cascading effect incurred by liberal use of the logical keyword, as you
mentioned.
As said before, I am using the OpenRuleBench, which is a set of test cases
for a number of rule engines such as XSB, Jess, and Jena. It is
perfectly self-contained and you can set it up and test Jess within 15
minutes.
But still I have a question: what type of truth maintenance method is
implemented in Jess? Do you solely rely on the Rete memory nodes and tokens
for this purpose?

On Fri, Jun 10, 2011 at 1:21 AM, Peter Lin <[hidden email]> wrote:

By "performance of RETE" what are you referring to?

There are many aspects of RETE, which one must study carefully. It's
good that you're translating RDF to OWL, but the larger question is
why use OWL/RDF in the first place? Unless the knowledge easily fits
into axioms like "sky is blue" or typical RDF examples, there's no
benefit to storing or using RDF. My own bias perspective on RDF/OWL.

The real question isn't "should I use RETE or how does RETE perform".
The real question is "how do I solve the problem efficiently?"

I've built compliance engines for trading systems using JESS. I can
say from first hand experience, it's how you use the engine that has
the biggest factor. I've done things like load 500K records to check
compliance across a portfolio set with minimal latency for nightly
batch processes. the key though is taking time to study existing
literature and understanding things before jumping to a solution.

providing concrete examples of what your doing will likely get better
advice than making general statements.


On Thu, Jun 9, 2011 at 12:17 PM, Md Oliya <[hidden email]> wrote:
Thank you very much Peter for the useful information. I will definitely
look
into that.
but in the context of this message, i am not loading a huge (subjective
interpretation?) knowledge base. It's 100k assertions, with the
operations
taking around 400 MB.
Secondly, in my experiments, I subtracted the loading time of the
assertions/retractions in jess, as I'm focusing on the performance of
the
Rete.
Lastly, I am not doing an RDF based mapping; rather, I follow the method
of
Description Logic Programs for translating each Class/Property of OWL
into
its corresponding template.


--Oli.


On Fri, Jun 10, 2011 at 12:03 AM, Peter Lin <[hidden email]> wrote:

Although it "may" be obvious to some people, I thought I'd mention
this well known lesson.

Do not load huge knowledge base into memory. This lesson is well
documented in existing literature on knowledge base systems. it's also
been discussed on JESS mailing list numerous times over the years, so
I would suggest searching JESS mailing list to learn from other
people's experience.

It's better to intelligently load knowledge base into memory as
needed, rather than blindly load everything. Even in the case where
someone has 256Gb of memory, one should ask "why load all that into
memory up front".

If the test is using RDF triples, it's well known that RDF triples
produces excessive partial matches and often results in
OutOfMemoryException. The real issue isn't JESS, it's how one tries to
solve a problem. I would recommend reading Gary Riley's book on expert
systems to avoid repeating a lot of mistakes that others have already
documented.


On Thu, Jun 9, 2011 at 11:41 AM, Md Oliya <[hidden email]> wrote:
Thank you Ernest.
I am experimenting with the Lehigh university benchmark, where i
transfer
OWL TBox into their equivalent rules in Jess, with the logical
construct.
Specifically, I am using the dataset and transformations, as used in
the
OpenRuleBench.
As for the runtimes, I missed a point about the retractions. The fact
is,
even if the session does not contain any rules (no defrules, just
assertions), loading the same set of retractions takes a considerable
time.
This indicates that the high runtime is mostly incurred by jess
internal
operations.
but still, when the number of changes grows high (say more than 10%)
the
runtime is not acceptable, and rerunning with the retracted kb would
be
faster.
I have another question as well: what type of truth maintenance
method
is
implemented in jess? Do you solely rely on the Rete memory nodes and
tokens
for this purpose?

--Oli.


On Mon, Jun 6, 2011 at 7:37 PM, Ernest Friedman-Hill
<[hidden email]>
wrote:

I don't think there's a particular reason in general. Retracting a
fact
takes only a little longer than asserting one, on average. But if we
assume
liberal use of "logical", retracting a single fact could result in a
sort of
"cascade effect" whereby retracting a single fact would result in
many
other
facts, and many activations, being removed also due to dependencies.
 All of
that would take time.  Still, your case seems extreme. Maybe there's
something pathological about this particular case.


On Jun 5, 2011, at 3:18 PM, Md Oliya wrote:

Hi,

I am doing some experiments with a set of rules which contain the
"logical" CE.
I intend to see the performance of Jess on a set of assertions as
well
as
retractions.

After some experiments, I found that the runtime for assertions is
much
less than that of retractions.
In fact, the performance on retractions is so bad that I would
rather
re
(run) jess on a retracted kb.


A sample test case:
The KB size,  number of assertions, number of retractions, and
number
of
rules are 100K, 50K, 1k, and 100, respectively.
runtimes are >> initial run: 860ms,  assertions:320ms --
 retractions:
4s.


Would you please give some hints on the reason?


Thanks in advance.
--Oli.

---------------------------------------------------------
Ernest Friedman-Hill
Informatics & Decision Sciences, Sandia National Laboratories
PO Box 969, MS 9012, Livermore, CA 94550
http://www.jessrules.com




























--------------------------------------------------------------------
To unsubscribe, send the words 'unsubscribe jess-users [hidden email]'
in the BODY of a message to [hidden email], NOT to the list
(use your own address!) List problems? Notify [hidden email].
--------------------------------------------------------------------

Reply | Threaded
Open this post in threaded view
|

Re: JESS: On the Performance of Logical Retractions

Friedman-Hill, Ernest

On Jun 11, 2011, at 6:11 AM, Oliya wrote:

>
> But still I have a question: what type of truth maintenance is  
> supported in Jess? Can you provide links to more information please.


The "logical" conditional element is the only form of truth  
maintenance in Jess. I thought you said you were already using it?


> --------------------------------------------------------

Ernest Friedman-Hill
Informatics & Decision Sciences, Sandia National Laboratories
PO Box 969, MS 9012, Livermore, CA 94550
http://www.jessrules.com







--------------------------------------------------------------------
To unsubscribe, send the words 'unsubscribe jess-users [hidden email]'
in the BODY of a message to [hidden email], NOT to the list
(use your own address!) List problems? Notify [hidden email].
--------------------------------------------------------------------

Reply | Threaded
Open this post in threaded view
|

Re: JESS: On the Performance of Logical Retractions

Md Oliya
I meant more information on the details of the implementation, or the algorithm used.



On Sat, Jun 11, 2011 at 8:19 PM, Ernest Friedman-Hill <[hidden email]> wrote:

On Jun 11, 2011, at 6:11 AM, Oliya wrote:


But still I have a question: what type of truth maintenance is supported in Jess? Can you provide links to more information please.


The "logical" conditional element is the only form of truth maintenance in Jess. I thought you said you were already using it?



--------------------------------------------------------

Ernest Friedman-Hill
Informatics & Decision Sciences, Sandia National Laboratories
PO Box 969, MS 9012, Livermore, CA 94550
http://www.jessrules.com







--------------------------------------------------------------------
To unsubscribe, send the words 'unsubscribe jess-users [hidden email]'
in the BODY of a message to [hidden email], NOT to the list
(use your own address!) List problems? Notify [hidden email].
--------------------------------------------------------------------


Reply | Threaded
Open this post in threaded view
|

RE: JESS: On the Performance of Logical Retractions

John Everett-2
In reply to this post by Friedman-Hill, Ernest
If truth maintenance is a central part of your architecture, I recommend
Building Problem Solvers, by Kenneth Forbus and Johan de Kleer.  It's on
Amazon:

http://www.amazon.com/Building-Problem-Solvers-Artificial-Intelligence/dp/0262061570/ref=sr_1_1?ie=UTF8&qid=1307815663&sr=8-1

and you can find the source code for the truth maintenance systems described
in the book here:

http://www.qrg.northwestern.edu/BPS/readme.html

As part of my PhD work, I developed a reasoning system based on the LTRE, a
forward-chaining rule engine on top of a logic-based TMS that is described
in Building Problem Solvers. Coming from this background, I continually find
Jess to be a Swiss Army knife of capabilities. However, if the logical
conditional in Jess is not sufficient for your architecture, you'll probably
need to implement a separate TMS layer. The logic-based TMS, which does fast
(but incomplete) Boolean constraint propagation, provides a good balance
between expressivity and efficiency.

The problem solver architectures presented in Building Problem Solvers use
the rule engine's rules to construct a problem-specific dependency network,
through which the TMS propagates truth values.  For example, the CyclePad
system

http://www.qrg.northwestern.edu/projects/NSF/Cyclepad/aboutcp.html

enables the user to assemble and analyze thermodynamic cycles from a palette
of devices (turbines, pumps, heaters, throttles, coolers, etc). Once the
user has completed the cycle design, CyclePad runs its knowledge base of
rules to generate a dependency network that captures the relationships among
the thermodynamic properties at the inlet and outlet of each device. The
user can choose the working fluid for the system, and this imposes further
logical dependencies. For example, water will condense at certain
combinations of pressure and temperature. The user analyzes the system by
making assumptions about thermodynamic properties that the system then
propagates through the dependency network.
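
In Jess terms, the nearest analogue I can picture is hanging derived facts off explicit assumption facts via the logical conditional. This is a toy sketch with invented template and slot names, nothing like CyclePad's actual implementation, and a real JTMS/LTMS does far more (labels, nogoods, constraint propagation):

(deftemplate assumption (slot name) (slot value))
(deftemplate derived    (slot name) (slot value))

;; The derived value rests on the assumed one; withdraw the assumption
;; and the engine withdraws the conclusion for you.
(defrule propagate-inlet-pressure
  (logical (assumption (name inlet-pressure) (value ?p)))
  =>
  (assert (derived (name turbine-inlet-pressure) (value ?p))))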



-John



-----Original Message-----
From: Ernest Friedman-Hill [mailto:[hidden email]]
Sent: Saturday, June 11, 2011 8:20 AM
To: [hidden email]
Subject: Re: JESS: On the Performance of Logical Retractions


On Jun 11, 2011, at 6:11 AM, Oliya wrote:

>
> But still I have a question: what type of truth maintenance is
> supported in Jess? Can you provide links to more information please.


The "logical" conditional element is the only form of truth  
maintenance in Jess. I thought you said you were already using it?


> --------------------------------------------------------

Ernest Friedman-Hill
Informatics & Decision Sciences, Sandia National Laboratories
PO Box 969, MS 9012, Livermore, CA 94550
http://www.jessrules.com











--------------------------------------------------------------------
To unsubscribe, send the words 'unsubscribe jess-users [hidden email]'
in the BODY of a message to [hidden email], NOT to the list
(use your own address!) List problems? Notify [hidden email].
--------------------------------------------------------------------

Reply | Threaded
Open this post in threaded view
|

Re: JESS: On the Performance of Logical Retractions

Peter Lin
I'll second that advice. There are other resources on TMS. I've used
this page in the past, which provides a high-level explanation of
different types of TMS:
http://www.cis.temple.edu/~ingargio/cis587/readings/tms.html. Read as
much as you can on TMS if that's critical to your research. ACMQueue
also has lots of papers on TMS.

Choosing the right TMS to solve your problem isn't easy and will
likely take a lot of effort, trial and error. There's no shortcut, and
using TMS correctly in a real application is quite challenging. Most
of the business rules applications I've worked on, and projects friends
have worked on, generally don't use logical TMS. Usually I see people
use it in a simple proof of concept, but as the project grows in
complexity, they remove it. Trying to wrap one's head around a
rulebase with hundreds or thousands of rules with logical TMS quickly
becomes daunting even for an experienced rule developer.

Without some kind of visual tool or analysis tool to examine the
logical dependencies, following the relationships in a 2K-rule ruleset
gets rather confusing.
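
Short of a real analysis tool, the low-tech thing I do is turn on the engine's trace output around the retraction. In Jess that looks roughly like the following; watch/unwatch are standard commands, but treat the exact argument names as from memory:

(watch facts)        ;; echoes every assert and retract, including the ones
                     ;; driven by logical dependencies, so the cascade is visible
(watch activations)  ;; likewise for activations appearing and disappearing
;; ... load the rules and data, then perform the retraction under study ...
(unwatch all)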

On Sat, Jun 11, 2011 at 2:23 PM, John Everett <[hidden email]> wrote:

> If truth maintenance is a central part of your architecture, I recommend
> Building Problem Solvers, by Kenneth Forbus and Johan de Kleer.  It's on
> Amazon:
>
> http://www.amazon.com/Building-Problem-Solvers-Artificial-Intelligence/dp/0262061570/ref=sr_1_1?ie=UTF8&qid=1307815663&sr=8-1
>
> and you can find the source code for the truth maintenance systems described
> in the book here:
>
> http://www.qrg.northwestern.edu/BPS/readme.html
>
> As part of my PhD work, I developed a reasoning system based on the LTRE, a
> forward-chaining rule engine on top of a logic-based TMS that is described
> in Building Problem Solvers. Coming from this background, I continually find
> Jess to be a Swiss Army knife of capabilities. However, if the logical
> conditional in Jess is not sufficient for your architecture, you'll probably
> need to implement a separate TMS layer. The logic-based TMS, which does fast
> (but incomplete) Boolean constraint propagation, provides a good balance
> between expressivity and efficiency.
>
> The problem solver architectures presented in Building Problem Solvers use
> the rule engine's rules to construct a problem-specific dependency network,
> through which the TMS propagates truth values.  For example, the CyclePad
> system
>
> http://www.qrg.northwestern.edu/projects/NSF/Cyclepad/aboutcp.html
>
> enables the user to assemble and analyze thermodynamic cycles from a palette
> of devices (turbines, pumps, heaters, throttles, coolers, etc). Once the
> user has completed the cycle design, CyclePad runs its knowledge base of
> rules to generate a dependency network that captures the relationships among
> the thermodynamic properties at the inlet and outlet of each device. The
> user can choose the working fluid for the system, and this imposes further
> logical dependencies. For example, water will condense at certain
> combinations of pressure and temperature. The user analyzes the system by
> making assumptions about thermodynamic properties that the system then
> propagates through the dependency network.
>
>
>
> -John
>
>
>
> -----Original Message-----
> From: Ernest Friedman-Hill [mailto:[hidden email]]
> Sent: Saturday, June 11, 2011 8:20 AM
> To: [hidden email]
> Subject: Re: JESS: On the Performance of Logical Retractions
>
>
> On Jun 11, 2011, at 6:11 AM, Oliya wrote:
>
>>
>> But still I have a question: what type of truth maintenance is
>> supported in Jess? Can you provide links to more information please.
>
>
> The "logical" conditional element is the only form of truth
> maintenance in Jess. I thought you said you were already using it?
>
>
>> --------------------------------------------------------
>
> Ernest Friedman-Hill
> Informatics & Decision Sciences, Sandia National Laboratories
> PO Box 969, MS 9012, Livermore, CA 94550
> http://www.jessrules.com
>
>
>
>
>
>
>
>
>
>
>
>
>




--------------------------------------------------------------------
To unsubscribe, send the words 'unsubscribe jess-users [hidden email]'
in the BODY of a message to [hidden email], NOT to the list
(use your own address!) List problems? Notify [hidden email].
--------------------------------------------------------------------