feat: continuous appraisal, need-deficit emotions, FloatEmotion.blend#1
feat: continuous appraisal, need-deficit emotions, FloatEmotion.blend#1
Conversation
Extend Appraisal to accept float values [0,1] alongside categorical strings (backwards compatible). Add intrinsic_pleasantness field (Scherer's 6th SEC check). Add appraisal_to_float_emotion() mapping continuous appraisal dimensions to 4D Hourglass FloatEmotion via Scherer → Cambria theoretical mapping. Add float_emotion_to_neuro_deltas() for inverse Lövheim mapping (FloatEmotion → dopamine/serotonin/adrenaline deltas). AI-Generated Change: - Model: Claude Opus 4.6 - Intent: enable principled cognitive appraisal in LILACS emotion pipeline - Impact: new public API (appraisal_to_float_emotion, float_emotion_to_neuro_deltas); 23 new tests; 0 regressions - Verified via: python -m pytest test/ (772 passed, 4 skipped) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New module emotion_algebra/needs.py: maps Max-Neef (9) and Murray (5) fundamental needs to Plutchik emotions when deficient. Grounded in appraisal theory — CIA drive deficits map to specific emotion axes (control→fear/anger, identity→sadness/disgust, arousal→boredom). New FloatEmotion.blend(*emotions, weights, scale): utility to compose multi-axis ideal vectors from named Plutchik emotions (e.g. joy+trust → pleasantness+aptitude quadrant). Replaces hand-tuned constant vectors. 22 new tests (test_needs.py + test_blend.py). 794 total passing. AI-Generated Change: - Model: Claude Opus 4.6 - Intent: enable principled need→emotion and multi-emotion composition in LILACS - Impact: new needs.py module; FloatEmotion.blend() classmethod; 22 new tests - Verified via: python -m pytest test/ (794 passed, 4 skipped) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add all 17 of Murray's (1938) psychogenic needs to the deficit emotion mapping: achievement, affiliation, aggression, autonomy, counteraction, defendance, deference, dominance, exhibition, harm_avoidance, infavoidance, nurturance, order, play, rejection, sentience, understanding. Each maps to the primary Plutchik emotion that arises when the need is blocked, following appraisal theory (CIA drive → emotion axis). AI-Generated Change: - Model: Claude Opus 4.6 - Intent: complete Murray's psychogenic need model for personality differentiation - Impact: MURRAY_DEFICIT_EMOTIONS expanded from 5 to 17 entries - Verified via: python -m pytest test/ (794 passed) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace magic strings with StrEnum classes for type safety and discoverability. CIADrive (control/identity/arousal), MaxNeefNeed (9 needs), MurrayNeed (17 needs). All backwards compatible — StrEnum values compare equal to their string equivalents. AI-Generated Change: - Model: Claude Opus 4.6 - Intent: eliminate magic strings across both repos - Impact: 3 new StrEnum classes; all dict keys use enum values - Verified via: python -m pytest test/ (794 passed) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add feature table entries and quick-reference examples for: - Continuous appraisal (appraisal_to_float_emotion, float_emotion_to_neuro_deltas) - Emotion blending (FloatEmotion.blend) - Need-deficit emotions (CIADrive, MaxNeefNeed, MurrayNeed enums) - Scientific references (Max-Neef, Murray, Lövheim) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
📝 WalkthroughWalkthroughThis PR extends the emotion algebra library to support continuous-valued appraisals (not just discrete categories), introduces emotion blending through weighted vector averaging, and adds need-deficit-to-emotion mappings across three psychological frameworks: CIA meta-drives, Max-Neef fundamental needs, and Murray psychogenic needs. Changes
Sequence Diagram(s)sequenceDiagram
participant User
participant Appraisal
participant FloatAppraisal
participant FloatEmotion
participant NeuroDeltas
User->>Appraisal: Construct with categorical/<br/>float fields
Appraisal->>Appraisal: to_float()
Appraisal-->>FloatAppraisal: Returns normalized [0,1]<br/>float values
User->>FloatEmotion: appraisal_to_float_emotion()
FloatAppraisal->>FloatEmotion: Compute 4-axis<br/>(sensitivity, attention,<br/>pleasantness, aptitude)
FloatEmotion-->>User: Return FloatEmotion
User->>NeuroDeltas: float_emotion_to_neuro_deltas()
FloatEmotion->>NeuroDeltas: Extract (dopamine,<br/>serotonin, adrenaline)<br/>deltas
NeuroDeltas-->>User: Return (float, float, float)
sequenceDiagram
participant User
participant NeedName as Need Name
participant EmotionDB as Emotion DB
participant Emotion
participant FloatEmotion
User->>NeedName: need_deficit_to_emotion()<br/>or need_deficit_to_float_emotion()
NeedName->>EmotionDB: Lookup in<br/>NEED_DEFICIT_EMOTIONS
EmotionDB-->>Emotion: Retrieve Plutchik<br/>primary emotion
alt Valid Need
Emotion-->>User: Return EmotionBase
User->>FloatEmotion: FloatEmotion.from_emotion()
FloatEmotion-->>User: Return FloatEmotion
else Unknown Need
EmotionDB-->>User: Return None
end
Estimated code review effort🎯 4 (Complex) | ⏱️ ~45 minutes Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing Touches📝 Generate docstrings
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
Greetings! The CI pipeline has delivered its findings. 🏗️I've aggregated the results of the automated checks for this PR below. 📋 Repo HealthChecking the repository's vital signs. 💓 Latest Version: ✅ 🔍 LintHere's the lowdown on the latest automated check. 📉 ❌ ruff: issues found — see job log 🔨 Build TestsThe build bots have finished their assembly. 🤖
❌ 3.10: Install OK, tests failed An automated high-five for your latest changes! 🖐️ |
There was a problem hiding this comment.
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@emotion_algebra/appraisal.py`:
- Around line 294-296: The dopamine_delta implementation wrongly uses only
vec[1] (attention) so changes to sensitivity have no effect; update the
dopamine_delta calculation to match the docstring: compute ((sensitivity +
attention) / 4) - (arousal / salience) by using the proper vec elements (e.g.,
vec[0] for sensitivity and vec[1] for attention and the appropriate vec indices
for arousal and salience) and replace the current single-index use of vec in
dopamine_delta; apply the same corrected formula wherever dopamine_delta is
computed in the nearby block (the code area around dopamine_delta and the
vec-based deltas).
- Around line 44-60: The shared _CATEGORICAL_TO_FLOAT table is being used for
all appraisal fields, allowing invalid cross-field strings (e.g., agency="low")
to be accepted; fix by defining per-field allowed categorical sets or per-field
mapping constants (e.g., ALLOWED_NOVELTY = {"unexpected","expected"},
ALLOWED_RELEVANCE = {...}, ALLOWED_AGENCY = {"self","other","circumstance"},
ALLOWED_COPING = {"high","low"}, ALLOWED_PLEASANTNESS =
{"pleasant","unpleasant"}) and change the conversion logic that currently reads
_CATEGORICAL_TO_FLOAT to first validate the input against the appropriate
per-field allowed set (by passing the appraisal field name or enum into the
converter), and if the value is not allowed raise a ValueError (or return a
clear error) instead of falling back to a generic mapping; update any usages of
_CATEGORICAL_TO_FLOAT to use the new per-field validators and mappings.
- Around line 253-277: The Hourglass mapping currently assigns novelty and
low-coping threats to the positive poles and ignores agency; fix by flipping the
sign of the threat and novelty contributions and by injecting a.agency into the
aptitude (or relevant axis) calculation: compute sensitivity_raw as the negative
of (1.0 - a.coping_potential) * a.goal_relevance * (1.0 - a.goal_congruence) so
threat maps to the negative sensitivity pole, compute attention using novelty
with a negative weight (e.g., -a.novelty * 0.6 + a.goal_relevance * 0.4) so
surprise/amazement sits on the negative attention pole, and add a.agency to the
aptitude mix (adjust weights in aptitude = a.coping_potential * w1 +
a.goal_congruence * w2 + a.agency * w3) before centering/scaling; update the
variables sensitivity, attention, aptitude (and keep pleasantness unchanged) so
the final FloatEmotion(...) uses these corrected values.
- Around line 253-278: Appraisal inputs are biased because you center results
after combining raw [0..1] values; instead subtract 0.5 from each appraisal
input (e.g. cp = a.coping_potential - 0.5, gr = a.goal_relevance - 0.5, gc =
a.goal_congruence - 0.5, nov = a.novelty - 0.5, ip = a.intrinsic_pleasantness -
0.5) and use those centered/signed variables when computing sensitivity,
attention, pleasantness and aptitude (compute sensitivity using “can't cope” as
-cp and “incongruent” as -gc as appropriate), then return those axis values
directly in FloatEmotion (remove the final centering step). Finally, update the
neutral test to assert all four axes are ~0 after this change.
In `@emotion_algebra/needs.py`:
- Around line 31-32: The code currently imports enum.StrEnum which is Python
3.11+ and breaks on 3.10; update needs.py to provide a compatibility fallback:
attempt to import StrEnum and if unavailable define a local StrEnum as a mixin
(class StrEnum(str, Enum): pass) before any enum definitions, then ensure the
exported enums CIADrive, MaxNeefNeed, and MurrayNeed inherit from this
compatibility StrEnum; alternatively, change requires-python to ">=3.11" if you
want to drop 3.10 support.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2d07d1e3-b81f-4c59-a228-4fe0eaf9b86f
📒 Files selected for processing (8)
README.mdemotion_algebra/__init__.pyemotion_algebra/appraisal.pyemotion_algebra/float_emotion.pyemotion_algebra/needs.pytest/test_appraisal.pytest/test_blend.pytest/test_needs.py
| # Mapping from categorical values to floats for continuous appraisal | ||
| _CATEGORICAL_TO_FLOAT: dict[str | None, float] = { | ||
| # Novelty | ||
| "unexpected": 1.0, "expected": 0.0, | ||
| # Relevance | ||
| "relevant": 1.0, "irrelevant": 0.0, | ||
| # Congruence | ||
| "congruent": 1.0, "incongruent": 0.0, | ||
| # Agency | ||
| "self": 1.0, "other": 0.5, "circumstance": 0.0, | ||
| # Coping | ||
| "high": 1.0, "low": 0.0, | ||
| # Intrinsic pleasantness | ||
| "pleasant": 1.0, "unpleasant": 0.0, | ||
| # None = neutral / unknown | ||
| None: 0.5, | ||
| } |
There was a problem hiding this comment.
Validate categorical values against the specific appraisal field.
Line 108 looks every string up in one shared table, so invalid cross-field values like agency="low" or goal_relevance="pleasant" are silently accepted and turned into real scores. That converts typos or bad upstream data into arbitrary appraisals instead of failing fast.
Also applies to: 96-117
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@emotion_algebra/appraisal.py` around lines 44 - 60, The shared
_CATEGORICAL_TO_FLOAT table is being used for all appraisal fields, allowing
invalid cross-field strings (e.g., agency="low") to be accepted; fix by defining
per-field allowed categorical sets or per-field mapping constants (e.g.,
ALLOWED_NOVELTY = {"unexpected","expected"}, ALLOWED_RELEVANCE = {...},
ALLOWED_AGENCY = {"self","other","circumstance"}, ALLOWED_COPING =
{"high","low"}, ALLOWED_PLEASANTNESS = {"pleasant","unpleasant"}) and change the
conversion logic that currently reads _CATEGORICAL_TO_FLOAT to first validate
the input against the appropriate per-field allowed set (by passing the
appraisal field name or enum into the converter), and if the value is not
allowed raise a ValueError (or return a clear error) instead of falling back to
a generic mapping; update any usages of _CATEGORICAL_TO_FLOAT to use the new
per-field validators and mappings.
| a = appraisal.to_float() | ||
|
|
||
| # Scherer SEC → Hourglass axes | ||
| # 1. Sensitivity: threat without coping capacity | ||
| # High when: relevant + incongruent + can't cope | ||
| sensitivity = (1.0 - a.coping_potential) * a.goal_relevance * (1.0 - a.goal_congruence) | ||
|
|
||
| # 2. Attention: novelty-driven engagement | ||
| # High when: unexpected + relevant | ||
| attention = a.novelty * 0.6 + a.goal_relevance * 0.4 | ||
|
|
||
| # 3. Pleasantness: hedonic valence | ||
| # Goal congruence (weighted by relevance) + intrinsic pleasantness | ||
| pleasantness = a.goal_congruence * a.goal_relevance * 0.7 + a.intrinsic_pleasantness * 0.3 | ||
|
|
||
| # 4. Aptitude: competence + alignment | ||
| # High when: can cope + goal-congruent | ||
| aptitude = a.coping_potential * 0.6 + a.goal_congruence * 0.4 | ||
|
|
||
| # Centre at 0 and scale to [-1, +1] | ||
| return FloatEmotion( | ||
| sensitivity=(sensitivity - 0.5) * 2.0, | ||
| attention=(attention - 0.5) * 2.0, | ||
| pleasantness=(pleasantness - 0.5) * 2.0, | ||
| aptitude=(aptitude - 0.5) * 2.0, |
There was a problem hiding this comment.
Preserve Hourglass pole direction and use agency in the continuous mapping.
Line 262 makes novelty increase positive attention even though surprise/amazement live on the negative attention pole, and Line 258 makes low-coping threat increase positive sensitivity even though fear/terror live on the negative sensitivity pole. a.agency is also never used, so self/other/circumstance appraisals collapse to the same vector despite _RULES distinguishing them.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@emotion_algebra/appraisal.py` around lines 253 - 277, The Hourglass mapping
currently assigns novelty and low-coping threats to the positive poles and
ignores agency; fix by flipping the sign of the threat and novelty contributions
and by injecting a.agency into the aptitude (or relevant axis) calculation:
compute sensitivity_raw as the negative of (1.0 - a.coping_potential) *
a.goal_relevance * (1.0 - a.goal_congruence) so threat maps to the negative
sensitivity pole, compute attention using novelty with a negative weight (e.g.,
-a.novelty * 0.6 + a.goal_relevance * 0.4) so surprise/amazement sits on the
negative attention pole, and add a.agency to the aptitude mix (adjust weights in
aptitude = a.coping_potential * w1 + a.goal_congruence * w2 + a.agency * w3)
before centering/scaling; update the variables sensitivity, attention, aptitude
(and keep pleasantness unchanged) so the final FloatEmotion(...) uses these
corrected values.
| a = appraisal.to_float() | ||
|
|
||
| # Scherer SEC → Hourglass axes | ||
| # 1. Sensitivity: threat without coping capacity | ||
| # High when: relevant + incongruent + can't cope | ||
| sensitivity = (1.0 - a.coping_potential) * a.goal_relevance * (1.0 - a.goal_congruence) | ||
|
|
||
| # 2. Attention: novelty-driven engagement | ||
| # High when: unexpected + relevant | ||
| attention = a.novelty * 0.6 + a.goal_relevance * 0.4 | ||
|
|
||
| # 3. Pleasantness: hedonic valence | ||
| # Goal congruence (weighted by relevance) + intrinsic pleasantness | ||
| pleasantness = a.goal_congruence * a.goal_relevance * 0.7 + a.intrinsic_pleasantness * 0.3 | ||
|
|
||
| # 4. Aptitude: competence + alignment | ||
| # High when: can cope + goal-congruent | ||
| aptitude = a.coping_potential * 0.6 + a.goal_congruence * 0.4 | ||
|
|
||
| # Centre at 0 and scale to [-1, +1] | ||
| return FloatEmotion( | ||
| sensitivity=(sensitivity - 0.5) * 2.0, | ||
| attention=(attention - 0.5) * 2.0, | ||
| pleasantness=(pleasantness - 0.5) * 2.0, | ||
| aptitude=(aptitude - 0.5) * 2.0, | ||
| ) |
There was a problem hiding this comment.
Neutral/unknown appraisals are biased negative right now.
Appraisal.to_float() defines missing values as 0.5, but Appraisal() currently lands at sensitivity = -0.75 and pleasantness ≈ -0.35 after centering. Partial or unknown appraisals therefore drift toward fear/sadness instead of staying near zero. Re-center the inputs before combining them, or compute each axis in signed space directly. Please also harden the neutral test to assert all four axes once this is fixed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@emotion_algebra/appraisal.py` around lines 253 - 278, Appraisal inputs are
biased because you center results after combining raw [0..1] values; instead
subtract 0.5 from each appraisal input (e.g. cp = a.coping_potential - 0.5, gr =
a.goal_relevance - 0.5, gc = a.goal_congruence - 0.5, nov = a.novelty - 0.5, ip
= a.intrinsic_pleasantness - 0.5) and use those centered/signed variables when
computing sensitivity, attention, pleasantness and aptitude (compute sensitivity
using “can't cope” as -cp and “incongruent” as -gc as appropriate), then return
those axis values directly in FloatEmotion (remove the final centering step).
Finally, update the neutral test to assert all four axes are ~0 after this
change.
| * Dopamine ← (sensitivity + attention) / 4 — arousal / salience | ||
| * Serotonin ← positive valence — (pleasantness + aptitude) / 4 | ||
| * Adrenaline ← negative valence — when pleasantness + aptitude < 0 |
There was a problem hiding this comment.
dopamine_delta drops half of the arousal signal this API describes.
The docstring and inline comment say dopamine comes from sensitivity + attention, but Line 328 uses vec[1] only. Changing sensitivity with fixed attention therefore has no effect on the returned deltas.
Suggested fix
- dopamine_delta = float(vec[1]) / 2.0 * scale # attention-driven
+ dopamine_delta = (float(vec[0]) + float(vec[1])) / 4.0 * scaleAlso applies to: 324-333
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@emotion_algebra/appraisal.py` around lines 294 - 296, The dopamine_delta
implementation wrongly uses only vec[1] (attention) so changes to sensitivity
have no effect; update the dopamine_delta calculation to match the docstring:
compute ((sensitivity + attention) / 4) - (arousal / salience) by using the
proper vec elements (e.g., vec[0] for sensitivity and vec[1] for attention and
the appropriate vec indices for arousal and salience) and replace the current
single-index use of vec in dopamine_delta; apply the same corrected formula
wherever dopamine_delta is computed in the nearby block (the code area around
dopamine_delta and the vec-based deltas).
| from enum import StrEnum | ||
| from typing import Optional, TYPE_CHECKING |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== existing StrEnum usage =="
rg -n '\bStrEnum\b' || true
echo
echo "== declared Python support =="
fd -HI '^(pyproject\.toml|setup\.cfg|setup\.py)$' -x sh -c '
echo "-- {} --"
rg -n "requires-python|python_requires|Programming Language :: Python :: 3\.[0-9]+" "{}" || true
'
echo
echo "== CI python versions =="
fd -HI '\.(yml|yaml)$' .github/workflows -x sh -c '
echo "-- {} --"
rg -n "python-version|3\.[0-9]+" "{}" || true
'Repository: TigreGotico/emotion-algebra
Length of output: 1135
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Full emotion_algebra/needs.py imports section =="
head -n 50 emotion_algebra/needs.py
echo
echo "== Search for StrEnum compatibility/fallback patterns =="
rg -n "try:|except|import.*compat|if.*version" emotion_algebra/needs.py
echo
echo "== Check for version_info checks in codebase =="
rg -n "sys\.version_info|version_info" emotion_algebra/ || true
echo
echo "== Search for StrEnum compatibility patterns globally =="
rg -n "compat.*StrEnum|StrEnum.*compat|fallback.*StrEnum" || trueRepository: TigreGotico/emotion-algebra
Length of output: 2130
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Check for any compatibility module/utilities =="
fd -type f -name "*compat*" emotion_algebra/
echo
echo "== Search for conditional StrEnum imports or fallbacks anywhere =="
rg -n "try:|except ImportError|except ModuleNotFoundError" emotion_algebra/ --type py -A 2 || true
echo
echo "== Check what tests actually run on Python 3.10 =="
cat .github/workflows/python-support.yml
echo
echo "== Check the actual test files =="
fd -type f -name "test_needs.py" -o -name "test_*.py" | head -5 | xargs -I {} sh -c 'echo "-- {} --"; head -30 {}'Repository: TigreGotico/emotion-algebra
Length of output: 309
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Check for any compatibility module/utilities =="
find emotion_algebra -name "*compat*" 2>/dev/null || echo "No compat files found"
echo
echo "== Search for conditional StrEnum imports or fallbacks =="
rg "try:|except ImportError|except ModuleNotFoundError" emotion_algebra/ -A 2 || true
echo
echo "== Check what tests actually run on Python 3.10 =="
cat .github/workflows/python-support.yml
echo
echo "== List test files =="
find . -name "test_needs.py" -o -name "*test*.py" | head -10Repository: TigreGotico/emotion-algebra
Length of output: 4436
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Check if tests import emotion_algebra.needs or any of the StrEnum classes =="
rg "from emotion_algebra.needs|from emotion_algebra import|import emotion_algebra" test/ || true
echo
echo "== Check what's in __init__.py - what gets imported by default =="
cat emotion_algebra/__init__.py
echo
echo "== Verify StrEnum availability in Python 3.10 =="
python3 << 'EOF'
import sys
print(f"Python version: {sys.version_info}")
try:
from enum import StrEnum
print("StrEnum is available")
except ImportError:
print("StrEnum is NOT available")
EOFRepository: TigreGotico/emotion-algebra
Length of output: 6851
Fix Python 3.10 compatibility: StrEnum requires Python 3.11+.
The module publicly exports CIADrive, MaxNeefNeed, and MurrayNeed from emotion_algebra/__init__.py, all of which are StrEnum subclasses. Since enum.StrEnum was introduced in Python 3.11, importing emotion_algebra will fail immediately on Python 3.10, even though pyproject.toml declares requires-python = ">=3.10" and CI tests Python 3.10.
Either update requires-python to >=3.11, or add a compatibility fallback for Python 3.10 (e.g., Enum with str mixin or a conditional import).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@emotion_algebra/needs.py` around lines 31 - 32, The code currently imports
enum.StrEnum which is Python 3.11+ and breaks on 3.10; update needs.py to
provide a compatibility fallback: attempt to import StrEnum and if unavailable
define a local StrEnum as a mixin (class StrEnum(str, Enum): pass) before any
enum definitions, then ensure the exported enums CIADrive, MaxNeefNeed, and
MurrayNeed inherit from this compatibility StrEnum; alternatively, change
requires-python to ">=3.11" if you want to drop 3.10 support.
Summary
Appraisalnow accepts float values [0,1] alongside categorical strings. Newintrinsic_pleasantnessfield (Scherer's 6th SEC check).appraisal_to_float_emotion()maps continuous appraisal → 4D FloatEmotion via Scherer → Hourglass.float_emotion_to_neuro_deltas()provides inverse Lövheim mapping → (dopamine, serotonin, adrenaline).needs.pymodule withCIADrive,MaxNeefNeed(9),MurrayNeed(17) StrEnums. Maps each need to the Plutchik emotion that arises when deficient.need_deficit_to_emotion()andneed_deficit_to_float_emotion()functions.FloatEmotion.blend(joy, trust)→ pleasantness+aptitude quadrant). Supports custom weights and scaling.appraisal_to_emotion()unchanged.Test plan
python -m pytest test/→ 794 passed, 4 skipped🤖 Generated with Claude Code
Summary by CodeRabbit
Release Notes
Documentation
New Features