This guide provides detailed instructions on how to create test data for BACnet Profiles to ensure the correctness of Codec functions.
Each Profile can contain its own test data:
```
profiles/
└── Vendor/
    ├── Vendor-Model.yaml        # Profile file
    └── tests/                   # Test data directory
        ├── test-data.json       # Test input (required)
        └── expected-output.json # Expected output (optional, recommended)
```
test-data.json

Purpose: Define test input data
Contains:
- fPort (LoRaWAN port number)
- input (hexadecimal uplink data)
- Test case names and descriptions
Validation Behavior: Ensures Codec functions execute successfully without throwing exceptions
expected-output.json

Purpose: Define expected decode output results
Contains:
- Complete data structure expected to be returned by each test case
Validation Behavior: Deep comparison of actual output against expected output, ensuring complete match
Why Recommended:
- ✅ Ensures output correctness, not just "no errors"
- ✅ Prevents regression: immediately detects output changes after code modifications
- ✅ Serves as documentation: clearly shows what each test data should decode to
```shell
mkdir -p profiles/Vendor/tests
```

Obtain uplink data from real devices and create the test input file:
```json
{
  "description": "Vendor-Model test data set",
  "testCases": [
    {
      "name": "Normal working data",
      "model": "LRS20100",
      "fPort": 10,
      "input": "040164010000000f41dc",
      "description": "Temperature=25°C, Humidity=60%, Battery=100%"
    },
    {
      "name": "Low temperature alert",
      "model": "LRS20100",
      "fPort": 10,
      "input": "0801640100000000ffdc",
      "description": "Temperature=-5°C, triggers low temperature alarm"
    }
  ]
}
```

Field Descriptions:
- `name` (Required): Test case name
- `model` (Optional): Device model, used to distinguish test cases for different models
  - If `model` is specified, the test case will only run when validating the corresponding model Profile
  - If `model` is not specified, the test case applies to all models (generic test)
  - Model name is automatically extracted from the Profile filename (e.g., `Senso8-LRS20100.yaml` → `LRS20100`)
- `fPort` (Required): LoRaWAN port number
- `input` (Required): Uplink data in hexadecimal format
- `description` (Optional): Test case description
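These field rules can be sanity-checked before running the real Codec tests. The sketch below is a hypothetical helper (not part of the project's scripts) that validates a single test case entry:

```javascript
// Hypothetical pre-flight check for one test-data.json entry.
// name, fPort and input are required; input must be an even-length hex string.
function checkTestCase(tc) {
  const errors = [];
  if (!tc.name) errors.push("missing name");
  if (typeof tc.fPort !== "number") errors.push("fPort must be a number");
  if (!/^[0-9a-fA-F]+$/.test(tc.input || "")) {
    errors.push("input must be a hexadecimal string");
  } else if (tc.input.length % 2 !== 0) {
    errors.push("input must contain whole bytes (even number of hex digits)");
  }
  return errors;
}

console.log(checkTestCase({ name: "ok", fPort: 10, input: "040164010000000f41dc" })); // []
console.log(checkTestCase({ fPort: "10", input: "xyz" })); // three errors
```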
Best Practices:
- ✅ Use real device data, do not fabricate
- ✅ Cover main scenarios: normal, boundary, exceptional
- ✅ Add clear descriptions explaining data meaning
- ✅ For multi-model scenarios, use the `model` field to distinguish test cases
```shell
node scripts/test-codec.js \
  -f profiles/Vendor/Model.yaml \
  -p 10 \
  -u 040164010000000f41dc
```

Example Output:
```json
{
  "data": [
    { "name": "Temperature", "channel": 1, "value": 25.0, "unit": "°C" },
    { "name": "Humidity", "channel": 2, "value": 60.0, "unit": "%" },
    { "name": "Battery", "channel": 3, "value": 100, "unit": "%" }
  ]
}
```

After confirming the output is correct, create the expected output file:
```json
{
  "description": "Vendor-Model expected output",
  "testCases": [
    {
      "name": "Normal working data",
      "expectedOutput": [
        { "name": "Temperature", "channel": 1, "value": 25.0, "unit": "°C" },
        { "name": "Humidity", "channel": 2, "value": 60.0, "unit": "%" },
        { "name": "Battery", "channel": 3, "value": 100, "unit": "%" }
      ]
    },
    {
      "name": "Low temperature alert",
      "expectedOutput": [
        { "name": "Temperature", "channel": 1, "value": -5.0, "unit": "°C" },
        { "name": "Humidity", "channel": 2, "value": 60.0, "unit": "%" },
        { "name": "Battery", "channel": 3, "value": 100, "unit": "%" },
        { "name": "LowTempAlert", "channel": 4, "value": 1, "unit": null }
      ]
    }
  ]
}
```

Important Notes:

- ⚠️ `expectedOutput` is an array, directly corresponding to the `data` field
- ⚠️ Test case order must exactly match `test-data.json`
- ⚠️ Include all fields: `name`, `channel`, `value`, `unit`
```shell
node scripts/validate-profile.js profiles/Vendor/Model.yaml
```

Success Output:
```
🧪 Running test data validation...
✓ Pass

Test result details:
✓ Normal working data [Output matched]
✓ Low temperature alert [Output matched]

======================================================================
✅ Validation passed
======================================================================
```
| Test File Configuration | Validation Behavior | Test Result | Recommendation |
|---|---|---|---|
| Only `test-data.json` | Only checks decode success | [Output not verified] | |
| Both files present | Deep comparison of output | [Output matched] | ✅ Recommended |
The validation script performs strict deep comparison:
- ✅ Array length
- ✅ All fields in each object
- ✅ Field value types and values
- ✅ null and undefined
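Conceptually, these checks amount to a recursive deep-equality test. A simplified sketch follows; the actual `validate-profile.js` implementation may differ in details:

```javascript
// Simplified deep comparison: arrays by length and elements, objects by
// key set and values, primitives (including null) by strict equality.
// Field order is ignored; key presence and values are what count.
function deepEqual(a, b) {
  if (a === b) return true; // primitives, null, identical references
  if (typeof a !== "object" || typeof b !== "object" || a === null || b === null) {
    return false;
  }
  if (Array.isArray(a) !== Array.isArray(b)) return false;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false; // also covers array length
  return keysA.every((k) => deepEqual(a[k], b[k]));
}

console.log(deepEqual({ a: 1, b: 2 }, { b: 2, a: 1 })); // true (order ignored)
console.log(deepEqual([1], [1, 2]));                    // false (length differs)
```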
Expected Output:

```json
[
  { "name": "Temperature", "channel": 1, "value": 25.0, "unit": "°C" }
]
```

Actual Output:

```json
[
  { "name": "Temperature", "channel": 1, "value": 25.1, "unit": "°C" }
]
```

Result: ❌ Validation failed (value mismatch: 25.0 vs 25.1)
❌ Error: Expected and actual output field order differs
Cause: JavaScript object field order may vary
Solution: Deep comparison doesn't care about field order, only field existence and values. If error occurs, check for field name typos.
```jsonc
// Expected
{ "value": 25 }

// Actual
{ "value": 25.0 }
```

Cause: In JavaScript, 25 and 25.0 are the same number, so deep comparison treats them as equal; any difference exists only in how the value is written in the JSON source files.

Solution: Consistently use one format, either float (25.0) or integer (25), for readability.
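A quick demonstration of why this case still passes comparison, even when the two JSON files are written inconsistently:

```javascript
// In JavaScript 25 and 25.0 are the same number, and JSON.stringify
// drops the trailing ".0" when serializing either of them.
console.log(25 === 25.0);                     // true
console.log(JSON.stringify(25.0));            // "25"
console.log(JSON.stringify({ value: 25.0 })); // {"value":25}
```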
❌ Error: Test case 1 in test-data.json doesn't match test case 1 in expected-output.json
Cause: Test case order differs between the two files
Solution: Ensure the testCases array order is exactly the same in both files
```jsonc
// ❌ Incorrect format
{
  "expectedOutput": {
    "data": [...]
  }
}

// ✅ Correct format
{
  "expectedOutput": [...]
}
```

Cause: expectedOutput should be a direct array, not wrapped in a data object

Solution: expectedOutput directly corresponds to the data field returned by decodeUplink
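To illustrate the relationship, using a hypothetical decoder result:

```javascript
// Hypothetical result of decodeUplink for one test case.
const result = {
  data: [{ name: "Temperature", channel: 1, value: 25.0, unit: "°C" }],
};

// expectedOutput mirrors result.data directly -- no extra "data" wrapper.
const expectedOutput = [
  { name: "Temperature", channel: 1, value: 25.0, unit: "°C" },
];

console.log(JSON.stringify(result.data) === JSON.stringify(expectedOutput)); // true
```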
When multiple model Profiles exist under the same vendor directory, you can use the model field to distinguish and filter test cases.
Suitable for situations where:
- A vendor has multiple product models
- Different models use the same protocol but different data formats
- You need to manage test cases for all models in a single `tests` directory
```
profiles/Senso8/
├── Senso8-LRS20100.yaml      # Temperature & Humidity Sensor
├── Senso8-LRS20200.yaml      # Temperature Sensor
├── Senso8-LRS20310.yaml      # CO2 Sensor
├── Senso8-LRS20600.yaml      # Door Sensor
└── tests/
    ├── test-data.json        # Test inputs for all models
    └── expected-output.json  # Expected outputs for all models
```
test-data.json:

```json
{
  "description": "Senso8 Series Test Cases",
  "testCases": [
    {
      "name": "LRS20100 Temperature & Humidity Test",
      "model": "LRS20100",
      "fPort": 10,
      "input": "01016400e901ef00000000"
    },
    {
      "name": "LRS20200 Temperature Test",
      "model": "LRS20200",
      "fPort": 10,
      "input": "01010064000000000000"
    },
    {
      "name": "Generic Battery Test",
      "fPort": 10,
      "input": "010164006400640000"
    }
  ]
}
```

expected-output.json:
```json
{
  "description": "Expected Outputs",
  "testCases": [
    {
      "name": "LRS20100 Temperature & Humidity Test",
      "model": "LRS20100",
      "expectedOutput": [
        {
          "name": "Temperature",
          "channel": 1,
          "value": 23.3,
          "unit": "°C"
        },
        {
          "name": "Humidity",
          "channel": 2,
          "value": 49.5,
          "unit": "%"
        }
      ]
    },
    {
      "name": "LRS20200 Temperature Test",
      "model": "LRS20200",
      "expectedOutput": [
        {
          "name": "Temperature",
          "channel": 1,
          "value": 10.0,
          "unit": "°C"
        }
      ]
    }
  ]
}
```

The validation script automatically extracts the model from the filename:

- `Senso8-LRS20100.yaml` → Model: `LRS20100`
- `Senso8-LRS20200.yaml` → Model: `LRS20200`
- `Dragino-LDS02.yaml` → Model: `LDS02`
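The extraction rule can be sketched as follows; the actual script's implementation may differ:

```javascript
// Extract the model from a "Vendor-Model.yaml" filename.
// Everything after the first "-" and before ".yaml" is treated as the model;
// returns null when the filename does not follow the convention.
function extractModel(filename) {
  const match = filename.match(/^[^-]+-(.+)\.yaml$/);
  return match ? match[1] : null;
}

console.log(extractModel("Senso8-LRS20100.yaml")); // "LRS20100"
console.log(extractModel("Dragino-LDS02.yaml"));   // "LDS02"
console.log(extractModel("Model.yaml"));           // null
```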
When validating a specific Profile:
- Matching Model Tests - Run test cases whose `model` field matches the current model
- Generic Tests - Run test cases without a `model` field
- Skip Other Models - Skip test cases whose `model` field names a different model
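The three rules above boil down to a simple filter; a minimal sketch (hypothetical helper):

```javascript
// Keep test cases whose model matches the current Profile's model,
// plus generic cases that have no model field at all.
function selectTestCases(testCases, currentModel) {
  return testCases.filter(
    (tc) => tc.model === undefined || tc.model === currentModel
  );
}

const cases = [
  { name: "Test A", model: "LRS20100" },
  { name: "Test B", model: "LRS20200" },
  { name: "Test C" }, // generic, no model field
];

console.log(selectTestCases(cases, "LRS20100").map((tc) => tc.name)); // ["Test A", "Test C"]
```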
Given the following test cases:
```jsonc
{
  "testCases": [
    { "name": "Test A", "model": "LRS20100", ... },
    { "name": "Test B", "model": "LRS20200", ... },
    { "name": "Test C", ... } // No model field
  ]
}
```

| Profile Being Validated | Test Cases Run |
|---|---|
| `Senso8-LRS20100.yaml` | Test A, Test C |
| `Senso8-LRS20200.yaml` | Test B, Test C |
| `Senso8-LRS20600.yaml` | Test C |
```shell
node scripts/validate-profile.js profiles/Senso8/Senso8-LRS20100.yaml
```

Output example:
```
🧪 Running test data validation...
Model detected: LRS20100
Running 2 of 5 test cases
✓ Pass

Test result details:
✓ LRS20100 Temperature & Humidity Test [LRS20100] [Output matched]
✓ Generic Battery Test [Output not verified]
```
Description:
- `Model detected: LRS20100` - Automatically detected model
- `Running 2 of 5 test cases` - Ran 2 tests out of 5 total
- `[LRS20100]` - Displays the model this test case belongs to
Include model information in test case names:
```jsonc
{
  "name": "LRS20100 Normal Temperature & Humidity Test",
  "model": "LRS20100",
  ...
}
```

For functionality applicable to all models (like battery level, button), don't specify the model field:

```jsonc
{
  "name": "Battery Level Test",
  // No model field, applies to all models
  "fPort": 10,
  "input": "..."
}
```

Organize related test cases into groups:

```jsonc
{
  "testCases": [
    // Basic functionality tests
    { "name": "LRS20100 Basic Data Report", "model": "LRS20100", ... },
    { "name": "LRS20200 Basic Data Report", "model": "LRS20200", ... },

    // Alarm tests
    { "name": "LRS20100 High Temperature Alarm", "model": "LRS20100", ... },
    { "name": "LRS20100 Low Temperature Alarm", "model": "LRS20100", ... },

    // Generic tests
    { "name": "Generic Battery Level Test", ... },
    { "name": "Generic Button Test", ... }
  ]
}
```

Cause:
- Typo in the `model` field
- Filename doesn't follow the `Vendor-Model.yaml` format
Solution:
- Check that the `model` field matches the model in the filename
- Ensure the filename format is `Vendor-Model.yaml`
Cause:
- All test cases specify `model`, but none match the current model
Solution:
- Check the `model` field of the test cases
- Or remove the `model` field to make them generic tests
Validation script outputs:

```
Model detected: null
Running 5 of 5 test cases
```
Cause:
- Filename doesn't follow the `Vendor-Model.yaml` format
Solution:
- Rename the file to the standard format, e.g., `Senso8-LRS20100.yaml`
```jsonc
{
  "testCases": [
    { "name": "Normal working data", ... },
    { "name": "Boundary value - Maximum temperature", ... },
    { "name": "Boundary value - Minimum temperature", ... },
    { "name": "Alarm triggered", ... },
    { "name": "Low battery", ... },
    { "name": "Sensor disconnected", ... }
  ]
}
```

✅ Good Naming:
- "Normal temperature humidity data - 25°C, 60%"
- "Temperature sensor disconnect alarm"
- "Low battery warning - Battery 10%"
❌ Poor Naming:
- "Test1"
- "test"
- "data"
```json
{
  "name": "High temperature alarm",
  "fPort": 10,
  "input": "0801E40308009C0000640A",
  "description": "Temperature=45°C, triggers high temperature alarm (threshold 40°C), Humidity=50%, Battery=100%"
}
```

```shell
# Obtain real uplink data from RAK gateway
# ChirpStack log example:
# Uplink: {"data":"040164010000000f41dc","fPort":10}
```

Run tests regularly to ensure expected output matches actual behavior:
```shell
# Run after each Codec modification
node scripts/validate-profile.js profiles/Vendor/Model.yaml
```

Validation failures automatically display differences:
```
✗ Normal data: Output does not match expected result
  Expected output:
  [
    { "name": "Temperature", "value": 25.0 }
  ]
  Actual output:
  [
    { "name": "Temperature", "value": 25.1 }
  ]
```
Progress from simple to complex:
```jsonc
// Step 1: Simplest case
{ "testCases": [
  { "name": "Basic data", ... }
]}

// Step 2: Add boundary cases
{ "testCases": [
  { "name": "Basic data", ... },
  { "name": "Maximum value", ... },
  { "name": "Minimum value", ... }
]}
```

```shell
# Validate that the JSON format is correct
cat profiles/Vendor/tests/test-data.json | jq .
cat profiles/Vendor/tests/expected-output.json | jq .
```

View examples in the project:
- `examples/minimal-profile/tests/` - Minimal example
- `examples/standard-profile/tests/` - Complete example
Confirm before submitting a Profile:

- Created `tests/test-data.json`
- Created `tests/expected-output.json`
- Contains at least 2-3 test cases
- Test data comes from real devices
- Test case order is consistent in both files
- All tests pass when running `validate-profile.js`
- All tests show `[Output matched]`
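For the order-consistency item on the checklist, a small hypothetical helper can compare the testCases arrays of the two files by name and position:

```javascript
// Compare testCases order by name between test-data.json and
// expected-output.json; returns the list of mismatched positions.
function orderMismatches(testData, expected) {
  const a = testData.testCases.map((tc) => tc.name);
  const b = expected.testCases.map((tc) => tc.name);
  const mismatches = [];
  const len = Math.max(a.length, b.length);
  for (let i = 0; i < len; i++) {
    if (a[i] !== b[i]) {
      mismatches.push({ index: i, testData: a[i], expected: b[i] });
    }
  }
  return mismatches;
}

const ok = orderMismatches(
  { testCases: [{ name: "A" }, { name: "B" }] },
  { testCases: [{ name: "A" }, { name: "B" }] }
);
console.log(ok.length); // 0
```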
Last Updated: 2025-10-23