<?xml version="1.0" encoding="UTF-8" standalone="yes"?><oembed><version><![CDATA[1.0]]></version><provider_name><![CDATA[vyasastrategy]]></provider_name><provider_url><![CDATA[https://vyasastrategy.wordpress.com]]></provider_url><author_name><![CDATA[vvyasa]]></author_name><author_url><![CDATA[https://vyasastrategy.wordpress.com/author/vvyasa/]]></author_url><title><![CDATA[RL vs Chunking (EBL) in Soar]]></title><type><![CDATA[link]]></type><html><![CDATA[<p>Reinforcement learning (RL) in Soar allows agents to alter behavior over time by<br />
dynamically changing numerical indifferent preferences in procedural memory in response<br />
to a reward signal.</p>
<p>This learning mechanism contrasts starkly with chunking. Whereas<br />
chunking is a one-shot form of learning that improves agent execution performance by<br />
summarizing sub-goal results, RL is an incremental form of learning that probabilistically<br />
alters agent behavior.</p>
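<p>The contrast above can be sketched in plain Python. This is not Soar syntax, and the names (<code>preferences</code>, <code>alpha</code>, <code>chunk</code>) are illustrative assumptions: the first half mimics how repeated reward incrementally nudges a numeric indifferent preference, while the second half mimics how a chunk caches a sub-goal result after a single derivation.</p>

```python
# --- RL-style learning: incremental and numeric ---
# Repeated experience nudges each operator's numeric preference
# toward the observed reward (a simplified, SARSA-like update).
preferences = {"op-a": 0.0, "op-b": 0.0}  # numeric indifferent preferences
alpha = 0.1                               # learning rate (illustrative value)

def rl_update(op, reward):
    """Move the operator's preference a fraction of the error toward reward."""
    preferences[op] += alpha * (reward - preferences[op])

for _ in range(20):          # behavior shifts gradually over many episodes
    rl_update("op-a", 1.0)   # op-a keeps earning reward
    rl_update("op-b", 0.0)   # op-b does not

# --- Chunking-style learning: one-shot and symbolic ---
# The first time a sub-goal's result is derived, it is cached as a
# new rule; afterwards the sub-goal processing is skipped entirely.
chunks = {}

def chunk(state, result):
    """Learn the result once; subsequent calls reuse the cached rule."""
    if state not in chunks:
        chunks[state] = result  # single exposure suffices
    return chunks[state]
```

<p>After the loop, <code>preferences["op-a"]</code> has drifted most of the way toward 1.0 while <code>"op-b"</code> stays at 0.0, so selection between them becomes increasingly biased rather than changing all at once, which is the probabilistic, incremental character the paragraph describes.</p>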
]]></html></oembed>