# Q-Values Vs. Reward Functions

A gentle introduction to the fundamentals of reinforcement learning

In the context of Markov Decision Processes (MDPs), both Q-values and reward functions are fundamental concepts, but they represent different things:

## Reward Functions

### Definition:

A reward function $$r(s, a, s')$$ specifies the immediate reward an agent receives for taking action $$a$$ in state $$s$$ and transitioning to state $$s'$$.

$r=r(s, a, s')$

### Details:

• Computation: reward functions are not computed by the agent but predetermined, typically in one of two ways:
  • implicit in interaction with the simulated or real environment, or
  • specified by the designer to encode the goal.
• Learned from data: when the goal is not given explicitly but expert behaviour in the environment is available, the reward function can be inferred from that behaviour. This is the inverse reinforcement learning problem.
• Purpose : The reward function gives feedback to the agent about the quality of an action taken in a specific state with respect to the task at hand. It forms the basis of what an agent is trying to optimize: the cumulative reward over time.
• Variants: Rewards can be deterministic or stochastic, meaning for a given action in a state, the reward might always be the same (deterministic) or might vary (stochastic).

### Representation:

In tabular settings, it’s often presented as a table or matrix, and in more complex scenarios (like continuous states and actions), it might be approximated or represented using function approximators.
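A minimal sketch of a tabular reward function for a tiny MDP. The state names, actions, and reward values below are illustrative assumptions, not part of any standard environment.

```python
# Tabular reward function r(s, a, s') as a lookup table.
# States, actions, and values here are illustrative assumptions.
rewards = {
    ("s0", "right", "s1"): -1.0,    # ordinary step cost
    ("s1", "right", "goal"): 10.0,  # reward for reaching the goal
}

def r(s, a, s_next, default=-1.0):
    """Deterministic reward lookup; unlisted transitions get the step cost."""
    return rewards.get((s, a, s_next), default)

print(r("s1", "right", "goal"))  # 10.0
```

A stochastic reward would replace the fixed values with a distribution to sample from; the tabular lookup itself stays the same shape.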

## Q-Values

### Definition:

A Q-value, denoted $$Q(s, a)$$, indicates the expected cumulative reward of taking action $$a$$ in state $$s$$ and then acting optimally thereafter.

It is defined as:

$Q(s,a) = \sum_{s' \in S} P_a(s' \mid s)\ [r(s,a,s') + \gamma\ V(s') ]$

where $$V(s')$$ denotes the optimal state value, itself related to Q-values through the Bellman optimality equation:

$V(s) = \max_{a \in A(s)} Q(s,a)$
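The one-step backup above can be written directly in code. This is a minimal sketch, assuming a hypothetical transition model `P[(s, a)]` mapping next states to probabilities; the toy numbers are illustrative.

```python
# One-step Bellman backup:
# Q(s, a) = sum_{s'} P(s' | s, a) * (r(s, a, s') + gamma * V(s'))
def q_value(P, r, V, s, a, gamma=0.9):
    """Compute Q(s, a) from a transition model, reward function, and state values."""
    return sum(p * (r(s, a, s2) + gamma * V[s2])
               for s2, p in P[(s, a)].items())

# Toy model (assumed): from s0, action "go" reaches s1 (80%) or stays (20%).
P = {("s0", "go"): {"s1": 0.8, "s0": 0.2}}
V = {"s0": 0.0, "s1": 10.0}
r = lambda s, a, s2: 1.0 if s2 == "s1" else 0.0

print(q_value(P, r, V, "s0", "go"))  # 0.8 * (1 + 0.9*10) + 0.2 * 0 = 8.0
```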

### Details:

• Computation : Q-values are learned with algorithms such as Q-learning or SARSA, which are based on the Bellman equation. The update rule combines the observed reward with the Q-values of subsequent states, and is applied repeatedly to refine the estimates until the required precision is achieved.
• Purpose : Q-values help an agent decide which action is best to take in a given state by estimating the expected long-term reward. Essentially, a Q-value combines the immediate reward and the expected future rewards.
• Variants: Q-values can be defined with respect to a fixed policy ($$Q^\pi$$) or with respect to the optimal policy ($$Q^*$$); algorithms like Q-learning estimate the latter directly, while SARSA estimates the value of the policy being followed.
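The Q-learning update mentioned above can be sketched in a few lines. Everything here (states, actions, step size) is an illustrative assumption; the update rule itself is the standard tabular one.

```python
# Tabular Q-learning update:
# Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
from collections import defaultdict

def q_learning_update(Q, s, a, reward, s_next, actions, alpha=0.1, gamma=0.9):
    """One temporal-difference step toward the Bellman optimality target."""
    target = reward + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

Q = defaultdict(float)          # unseen (state, action) pairs start at 0
actions = ["left", "right"]
q_learning_update(Q, "s0", "right", -1.0, "s1", actions)
print(Q[("s0", "right")])       # 0 + 0.1 * (-1.0 + 0.9*0 - 0) = -0.1
```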

### Representation:

Like reward functions, Q-values can be tabular (in simpler environments) or approximated using neural networks or other function approximators in more complex scenarios (Deep Q Networks or DQN is an example where neural networks are used to approximate Q-values).

## Key Ideas

Q-values and reward functions are foundational building blocks for understanding reinforcement learning methods. This section summarizes key ideas from these building blocks.

1. Q-Values vs. Reward Functions

In an MDP, there are two key functions that guide an agent’s actions:

• Reward Function: Provides immediate feedback for actions taken in specific states. It essentially tells the agent how “good” or “bad” its last action was. This feedback is based on the state transition and action.
• Q-Values (Action-Value Function): Denotes the expected cumulative future reward of taking a specific action in a particular state and then following an optimal policy thereafter.
2. Illustrating Q-Values

Imagine a robot in a simple 3x3 grid world. This robot must navigate obstacles to reach a goal. While the reward function might give the robot immediate feedback (e.g., -1 for every move, +10 for reaching the goal), the Q-value helps the robot decide the optimal path by accounting for both immediate and future rewards.
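The grid world described above can be made concrete with value iteration. The layout (goal at cell (2, 2), deterministic moves, $$\gamma = 0.9$$) is an assumption filled in for illustration; the rewards match the example.

```python
# Value iteration on a 3x3 grid world: -1 per move, +10 on reaching the goal.
# Goal position, discount factor, and deterministic moves are assumptions.
GOAL, GAMMA = (2, 2), 0.9
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
states = [(i, j) for i in range(3) for j in range(3)]

def step(s, a):
    """Deterministic transition; bumping into a wall leaves the state unchanged."""
    i, j = s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1]
    return (i, j) if 0 <= i < 3 and 0 <= j < 3 else s

def reward(s, a, s_next):
    return 10.0 if s_next == GOAL else -1.0

V = {s: 0.0 for s in states}
for _ in range(100):  # synchronous sweeps until (approximately) converged
    V = {s: (0.0 if s == GOAL else
             max(reward(s, a, step(s, a)) + GAMMA * V[step(s, a)]
                 for a in ACTIONS))
         for s in states}

# Q-values at the cell just left of the goal: "right" reaches it immediately,
# so it dominates despite every action paying the same -1 step cost.
Q = {a: reward((2, 1), a, step((2, 1), a)) + GAMMA * V[step((2, 1), a)]
     for a in ACTIONS}
print(max(Q, key=Q.get))  # "right"
```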

3. Dynamics of Reward Functions

Rewards can be deterministic (fixed for specific actions in states) or stochastic (varying even for the same action-state pair). For instance, a robot trying to pick up an object might mostly succeed and receive a positive reward. Still, occasionally, due to external factors, it might fail and receive a negative reward.

4. Iterative Evaluation of the Value Function

It’s crucial to differentiate between the reward and value functions when applying iterative algorithms like policy iteration or value iteration. While the reward function provides immediate feedback, the value function gets updated using the Bellman expectation equation, which integrates both immediate rewards and the anticipated value of future states.
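The Bellman expectation backup described here can be sketched as iterative policy evaluation. The two-state chain and the uniform policy are illustrative assumptions.

```python
# Iterative policy evaluation: repeatedly apply the Bellman expectation backup
# V(s) <- sum_a pi(a|s) * sum_{s'} P(s'|s,a) * (r(s,a,s') + gamma * V(s')).
def policy_evaluation(states, actions, P, r, pi, gamma=0.9, tol=1e-8):
    """Sweep states in place until values change by less than tol."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            v = sum(pi[s][a] * sum(p * (r(s, a, s2) + gamma * V[s2])
                                   for s2, p in P[(s, a)].items())
                    for a in actions)
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V

# Toy chain (assumed): "stay" keeps the state, "go" swaps A <-> B; +1 for landing in B.
states, actions = ["A", "B"], ["stay", "go"]
P = {("A", "stay"): {"A": 1.0}, ("A", "go"): {"B": 1.0},
     ("B", "stay"): {"B": 1.0}, ("B", "go"): {"A": 1.0}}
r = lambda s, a, s2: 1.0 if s2 == "B" else 0.0
pi = {"A": {"stay": 0.5, "go": 0.5}, "B": {"stay": 0.5, "go": 0.5}}
V = policy_evaluation(states, actions, P, r, pi)
```

Note the distinction the text draws: `r` gives one-step feedback, while `V` accumulates it over the discounted horizon (here both states converge to $$0.5/(1-0.9) = 5$$).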

5. Designing Reward Functions

Designing a reward function is a critical task. While one can, in principle, select an arbitrary reward function, the choice can drastically influence an agent’s behavior. A poorly chosen reward might even lead to unintended consequences. The challenge is to align the reward structure with desired outcomes.

6. Are Q-Values Goal-Dependent?

Indeed, Q-values can change if the goal of the task changes, leading to a different reward structure. For example, in our robot grid world, if the goal shifts to a different location, the Q-values for various actions will need to be recalculated or relearned.

7. Range of Q-Values

The range of Q-values depends on multiple factors: the reward function, discount factor, and task structure. While no fixed universal range applies to all scenarios, understanding these factors can help predict Q-value dynamics.
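One concrete consequence of these factors: if rewards are bounded, $$r(s, a, s') \in [r_{\min}, r_{\max}]$$, and $$0 \le \gamma < 1$$, the discounted sum is bounded by a geometric series:

$\frac{r_{\min}}{1-\gamma} \;\le\; Q(s,a) \;\le\; \sum_{t=0}^{\infty} \gamma^{t}\, r_{\max} \;=\; \frac{r_{\max}}{1-\gamma}$

For example, with $$r_{\max} = 10$$ and $$\gamma = 0.9$$, no Q-value can exceed $$10 / (1 - 0.9) = 100$$.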