Compare commits


1 Commit

Jesse Hills 4f67d25506 [validation] Store version extra separator 2025-07-29 21:05:27 +12:00
5583 changed files with 51441 additions and 146940 deletions

View File

@@ -9,7 +9,7 @@ This document provides essential context for AI models interacting with this pro
## 2. Core Technologies & Stack
* **Languages:** Python (>=3.11), C++ (gnu++20)
* **Languages:** Python (>=3.10), C++ (gnu++20)
* **Frameworks & Runtimes:** PlatformIO, Arduino, ESP-IDF.
* **Build Systems:** PlatformIO is the primary build system. CMake is used as an alternative.
* **Configuration:** YAML.
@@ -38,7 +38,7 @@ This document provides essential context for AI models interacting with this pro
5. **Dashboard** (`esphome/dashboard/`): A web-based interface for device configuration, management, and OTA updates.
* **Platform Support:**
1. **ESP32** (`components/esp32/`): Espressif ESP32 family. Supports multiple variants (Original, C2, C3, C5, C6, H2, P4, S2, S3) with ESP-IDF framework. Arduino framework supports only a subset of the variants (Original, C3, S2, S3).
1. **ESP32** (`components/esp32/`): Espressif ESP32 family. Supports multiple variants (S2, S3, C3, etc.) and both IDF and Arduino frameworks.
2. **ESP8266** (`components/esp8266/`): Espressif ESP8266. Arduino framework only, with memory constraints.
3. **RP2040** (`components/rp2040/`): Raspberry Pi Pico/RP2040. Arduino framework with PIO (Programmable I/O) support.
4. **LibreTiny** (`components/libretiny/`): Realtek and Beken chips. Supports multiple chip families and auto-generated components.
@@ -51,79 +51,7 @@ This document provides essential context for AI models interacting with this pro
* **Naming Conventions:**
* **Python:** Follows PEP 8. Use clear, descriptive names following snake_case.
* **C++:** Follows the Google C++ Style Guide with these specifics (following clang-tidy conventions):
- Function, method, and variable names: `lower_snake_case`
- Class/struct/enum names: `UpperCamelCase`
- Top-level constants (global/namespace scope): `UPPER_SNAKE_CASE`
- Function-local constants: `lower_snake_case`
- Protected/private fields: `lower_snake_case_with_trailing_underscore_`
- Favor descriptive names over abbreviations
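A minimal sketch illustrating these naming rules together (all names are hypothetical):
```cpp
#include <cstdint>

// Top-level constant (namespace scope): UPPER_SNAKE_CASE
static const uint32_t DEFAULT_UPDATE_INTERVAL = 60000;

// Class name: UpperCamelCase
class TemperatureReader {
 public:
  // Method and variable names: lower_snake_case
  void set_update_interval(uint32_t interval_ms) { this->update_interval_ = interval_ms; }

  float read_celsius() {
    // Function-local constant: lower_snake_case
    const float scale_factor = 0.0625f;
    return this->raw_value_ * scale_factor;
  }

 protected:
  // Protected fields: lower_snake_case with trailing underscore
  uint32_t update_interval_{DEFAULT_UPDATE_INTERVAL};
  int16_t raw_value_{0};
};
```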
* **C++ Field Visibility:**
* **Prefer `protected`:** Use `protected` for most class fields to enable extensibility and testing. Fields should be `lower_snake_case_with_trailing_underscore_`.
* **Use `private` for safety-critical cases:** Use `private` visibility when direct field access could introduce bugs or violate invariants:
1. **Pointer lifetime issues:** When setters validate and store pointers from known lists to prevent dangling references.
```cpp
// Helper to find matching string in vector and return its pointer
inline const char *vector_find(const std::vector<const char *> &vec, const char *value) {
for (const char *item : vec) {
if (strcmp(item, value) == 0)
return item;
}
return nullptr;
}
class ClimateDevice {
public:
void set_custom_fan_modes(std::initializer_list<const char *> modes) {
this->custom_fan_modes_ = modes;
this->active_custom_fan_mode_ = nullptr; // Reset when modes change
}
bool set_custom_fan_mode(const char *mode) {
// Find mode in supported list and store that pointer (not the input pointer)
const char *validated_mode = vector_find(this->custom_fan_modes_, mode);
if (validated_mode != nullptr) {
this->active_custom_fan_mode_ = validated_mode;
return true;
}
return false;
}
private:
std::vector<const char *> custom_fan_modes_; // Pointers to string literals in flash
const char *active_custom_fan_mode_{nullptr}; // Must point to entry in custom_fan_modes_
};
```
2. **Invariant coupling:** When multiple fields must remain synchronized to prevent buffer overflows or data corruption.
```cpp
class Buffer {
public:
void resize(size_t new_size) {
auto new_data = std::make_unique<uint8_t[]>(new_size);
if (this->data_) {
std::memcpy(new_data.get(), this->data_.get(), std::min(this->size_, new_size));
}
this->data_ = std::move(new_data);
this->size_ = new_size; // Must stay in sync with data_
}
private:
std::unique_ptr<uint8_t[]> data_;
size_t size_{0}; // Must match allocated size of data_
};
```
3. **Resource management:** When setters perform cleanup or registration operations that derived classes might skip.
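A hedged sketch of this case, in the style of the examples above (the class and the hook are illustrative, not an existing ESPHome API):
```cpp
#include <functional>
#include <utility>
#include <vector>

class EventSource {
 public:
  // The setter performs registration bookkeeping; if listeners_ were
  // protected, a derived class could assign it directly and skip the hook.
  void add_listener(std::function<void(int)> listener) {
    this->listeners_.push_back(std::move(listener));
    this->on_listeners_changed_();  // Must run on every registration
  }

 protected:
  // Controlled access for derived classes instead of exposing the field
  const std::vector<std::function<void(int)>> &get_listeners_() const { return this->listeners_; }

 private:
  void on_listeners_changed_() { /* e.g., re-register an interrupt or resize a queue */ }
  std::vector<std::function<void(int)>> listeners_;
};
```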
* **Provide `protected` accessor methods:** When derived classes need controlled access to `private` members.
* **C++ Preprocessor Directives:**
* **Avoid `#define` for constants:** Using `#define` for constants is discouraged; replace such macros with `const` variables or enums.
* **Use `#define` only for:**
- Conditional compilation (`#ifdef`, `#ifndef`)
- Compile-time sizes calculated during Python code generation (e.g., configuring `std::array` or `StaticVector` dimensions via `cg.add_define()`)
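A short sketch contrasting the discouraged and accepted uses (assuming `MAX_VALUES` is injected via `cg.add_define()`; the fallback below is for illustration only):
```cpp
#include <array>
#include <cstdint>

// Discouraged: #define SCALE_FACTOR 16
// Preferred: a typed constant
static const uint8_t SCALE_FACTOR = 16;

// Accepted: conditional compilation
#ifdef USE_ESP32
// ESP32-only code
#endif

// Accepted: compile-time size generated from the Python side
#ifndef MAX_VALUES
#define MAX_VALUES 8  // Illustrative fallback; normally set by cg.add_define("MAX_VALUES", count)
#endif
static std::array<int, MAX_VALUES> values{};
```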
* **C++ Additional Conventions:**
* **Member access:** Prefix all class member access with `this->` (e.g., `this->value_` not `value_`)
* **Indentation:** Use spaces (two per indentation level), not tabs
* **Type aliases:** Prefer `using type_t = int;` over `typedef int type_t;`
* **Line length:** Wrap lines at no more than 120 characters
* **C++:** Follows the Google C++ Style Guide.
* **Component Structure:**
* **Standard Files:**
@@ -132,7 +60,7 @@ This document provides essential context for AI models interacting with this pro
├── __init__.py # Component configuration schema and code generation
├── [component].h # C++ header file (if needed)
├── [component].cpp # C++ implementation (if needed)
└── [platform]/ # Platform-specific implementations
└── [platform]/ # Platform-specific implementations
├── __init__.py # Platform-specific configuration
├── [platform].h # Platform C++ header
└── [platform].cpp # Platform C++ implementation
@@ -172,7 +100,8 @@ This document provides essential context for AI models interacting with this pro
* **C++ Class Pattern:**
```cpp
namespace esphome::my_component {
namespace esphome {
namespace my_component {
class MyComponent : public Component {
public:
@@ -188,7 +117,8 @@ This document provides essential context for AI models interacting with this pro
int param_{0};
};
} // namespace esphome::my_component
} // namespace my_component
} // namespace esphome
```
* **Common Component Examples:**
@@ -220,8 +150,7 @@ This document provides essential context for AI models interacting with this pro
* **Configuration Validation:**
* **Common Validators:** `cv.int_`, `cv.float_`, `cv.string`, `cv.boolean`, `cv.int_range(min=0, max=100)`, `cv.positive_int`, `cv.percentage`.
* **Complex Validation:** `cv.All(cv.string, cv.Length(min=1, max=50))`, `cv.Any(cv.int_, cv.string)`.
* **Platform-Specific:** `cv.only_on(["esp32", "esp8266"])`, `esp32.only_on_variant(...)`, `cv.only_on_esp32`, `cv.only_on_esp8266`, `cv.only_on_rp2040`.
* **Framework-Specific:** `cv.only_with_framework(...)`, `cv.only_with_arduino`, `cv.only_with_esp_idf`.
* **Platform-Specific:** `cv.only_on(["esp32", "esp8266"])`, `cv.only_with_arduino`.
* **Schema Extensions:**
```python
CONFIG_SCHEMA = cv.Schema({ ... })
@@ -239,8 +168,6 @@ This document provides essential context for AI models interacting with this pro
* `platformio.ini`: Configures the PlatformIO build environments for different microcontrollers.
* `.pre-commit-config.yaml`: Configures the pre-commit hooks for linting and formatting.
* **CI/CD Pipeline:** Defined in `.github/workflows`.
* **Static Analysis & Development:**
* `esphome/core/defines.h`: A comprehensive header containing all `#define` directives that components can add via `cg.add_define()` in Python. It is used exclusively for development, static analysis tools, and CI testing; it is not used during runtime compilation. When a component adds a new define, the define must also be added to this file so IDEs and static analyzers see it. The file includes feature flags, build configurations, and platform-specific defines that let static analyzers understand the complete codebase without compiling for a specific platform.
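A hedged sketch of the pattern (the `USE_MY_FEATURE` define is hypothetical): the component guards its C++ code with a define added from Python, and the same define must also be listed in `esphome/core/defines.h` so IDEs and clang-tidy see that branch.
```cpp
// my_component.cpp: include defines.h so feature flags are visible
#include "esphome/core/defines.h"

#ifdef USE_MY_FEATURE
// Compiled only when the Python side ran cg.add_define("USE_MY_FEATURE")
void setup_my_feature() {
  // feature-specific setup
}
#endif
```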
## 6. Development & Testing Workflow
@@ -256,11 +183,6 @@ This document provides essential context for AI models interacting with this pro
└── components/[component]/ # Component-specific tests
```
Run them using `script/test_build_components`. Use `-c <component>` to test specific components and `-t <target>` for specific platforms.
* **Testing All Components Together:** To verify that all components can be tested together without ID conflicts or configuration issues, use:
```bash
./script/test_component_grouping.py -e config --all
```
This tests all components in a single build to catch conflicts that might not appear when testing components individually. Use `-e config` for fast configuration validation, or `-e compile` for full compilation testing.
* **Debugging and Troubleshooting:**
* **Debug Tools:**
- `esphome config <file>.yaml` to validate configuration.
@@ -276,12 +198,12 @@ This document provides essential context for AI models interacting with this pro
## 7. Specific Instructions for AI Collaboration
* **Contribution Workflow (Pull Request Process):**
1. **Fork & Branch:** Create a new branch based on the `dev` branch (always use `git checkout -b <branch-name> dev` to ensure you're branching from `dev`, not the currently checked out branch).
1. **Fork & Branch:** Create a new branch in your fork.
2. **Make Changes:** Adhere to all coding conventions and patterns.
3. **Test:** Create component tests for all supported platforms and run the full test suite locally.
4. **Lint:** Run `pre-commit` to ensure code is compliant.
5. **Commit:** Commit your changes. There is no strict format for commit messages.
6. **Pull Request:** Submit a PR against the `dev` branch. The Pull Request title should have a prefix of the component being worked on (e.g., `[display] Fix bug`, `[abc123] Add new component`). Update documentation, examples, and add `CODEOWNERS` entries as needed. Pull requests should always be made using the `.github/PULL_REQUEST_TEMPLATE.md` template - fill out all sections completely without removing any parts of the template.
6. **Pull Request:** Submit a PR against the `dev` branch. The Pull Request title should have a prefix of the component being worked on (e.g., `[display] Fix bug`, `[abc123] Add new component`). Update documentation, examples, and add `CODEOWNERS` entries as needed. Pull requests should always be made with the PULL_REQUEST_TEMPLATE.md template filled out correctly.
* **Documentation Contributions:**
* Documentation is hosted in the separate `esphome/esphome-docs` repository.
@@ -291,161 +213,6 @@ This document provides essential context for AI models interacting with this pro
* **Component Development:** Keep dependencies minimal, provide clear error messages, and write comprehensive docstrings and tests.
* **Code Generation:** Generate minimal and efficient C++ code. Validate all user inputs thoroughly. Support multiple platform variations.
* **Configuration Design:** Aim for simplicity with sensible defaults, while allowing for advanced customization.
* **Embedded Systems Optimization:** ESPHome targets resource-constrained microcontrollers. Be mindful of flash size and RAM usage.
**Why Heap Allocation Matters:**
ESP devices run for months with small heaps shared between Wi-Fi, BLE, LWIP, and application code. Over time, repeated allocations of different sizes fragment the heap. Failures happen when the largest contiguous block shrinks, even if total free heap is still large. We have seen field crashes caused by this.
**Heap allocation after `setup()` must be avoided unless there is truly no alternative.** Every allocation/deallocation cycle contributes to fragmentation. ESPHome treats runtime heap allocation as a long-term reliability bug, not a performance issue. Helpers that hide allocation (`std::string`, `std::to_string`, string-returning helpers) are being deprecated in favor of buffer- and view-based APIs.
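A minimal sketch of that direction (function names are illustrative): write into a caller-provided buffer instead of returning a heap-allocated string.
```cpp
#include <cstddef>
#include <cstdio>

// Allocating style (being phased out): std::string format_temperature(float);
// Buffer-based style: the caller owns the storage, so no heap is touched
size_t format_temperature(float celsius, char *buffer, size_t buffer_size) {
  int written = std::snprintf(buffer, buffer_size, "%.1f C", celsius);
  return written > 0 ? static_cast<size_t>(written) : 0;
}

void example_usage() {
  char buf[16];  // Stack storage, reused on every call, zero fragmentation
  format_temperature(21.5f, buf, sizeof(buf));
}
```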
**STL Container Guidelines:**
ESPHome runs on embedded systems with limited resources. Choose containers carefully:
1. **Compile-time-known sizes:** Use `std::array` instead of `std::vector` when size is known at compile time.
```cpp
// Bad - generates STL realloc code
std::vector<int> values;
// Good - no dynamic allocation
std::array<int, MAX_VALUES> values;
```
Use `cg.add_define("MAX_VALUES", count)` to set the size from Python configuration.
**For byte buffers:** Avoid `std::vector<uint8_t>` unless the buffer needs to grow. Use `std::unique_ptr<uint8_t[]>` instead.
> **Note:** `std::unique_ptr<uint8_t[]>` does **not** provide bounds checking or iterator support like `std::vector<uint8_t>`. Use it only when you do not need these features and want minimal overhead.
```cpp
// Bad - STL overhead for simple byte buffer
std::vector<uint8_t> buffer;
buffer.resize(256);
// Good - minimal overhead, single allocation
std::unique_ptr<uint8_t[]> buffer = std::make_unique<uint8_t[]>(256);
// Or if size is constant:
std::array<uint8_t, 256> buffer;
```
2. **Compile-time-known fixed sizes with vector-like API:** Use `StaticVector` from `esphome/core/helpers.h` for compile-time fixed size with `push_back()` interface (no dynamic allocation).
```cpp
// Bad - generates STL realloc code (_M_realloc_insert)
std::vector<ServiceRecord> services;
services.reserve(5); // Still includes reallocation machinery
// Good - compile-time fixed size, no dynamic allocation
StaticVector<ServiceRecord, MAX_SERVICES> services;
services.push_back(record1);
```
Use `cg.add_define("MAX_SERVICES", count)` to set the size from Python configuration.
Like `std::array` but with vector-like API (`push_back()`, `size()`) and no STL reallocation code.
3. **Runtime-known sizes:** Use `FixedVector` from `esphome/core/helpers.h` when the size is only known at runtime initialization.
```cpp
// Bad - generates STL realloc code (_M_realloc_insert)
std::vector<TxtRecord> txt_records;
txt_records.reserve(5); // Still includes reallocation machinery
// Good - runtime size, single allocation, no reallocation machinery
FixedVector<TxtRecord> txt_records;
txt_records.init(record_count); // Initialize with exact size at runtime
```
**Benefits:**
- Eliminates `_M_realloc_insert`, `_M_default_append` template instantiations (saves 200-500 bytes per instance)
- Single allocation, no upper bound needed
- No reallocation overhead
- Compatible with protobuf code generation when using `[(fixed_vector) = true]` option
4. **Small datasets (1-16 elements):** Use `std::vector` or `std::array` with simple structs instead of `std::map`/`std::set`/`std::unordered_map`.
```cpp
// Bad - 2KB+ overhead for red-black tree/hash table
std::map<std::string, int> small_lookup;
std::unordered_map<int, std::string> tiny_map;
// Good - simple struct with linear search (std::vector is fine)
struct LookupEntry {
const char *key;
int value;
};
std::vector<LookupEntry> small_lookup = {
{"key1", 10},
{"key2", 20},
{"key3", 30},
};
// Or std::array if size is compile-time constant:
// std::array<LookupEntry, 3> small_lookup = {{ ... }};
```
Linear search on small datasets (1-16 elements) is often faster than hashing or tree overhead, though this depends on lookup frequency and access patterns. For frequent lookups in hot code paths, the O(1) vs O(n) complexity difference may still matter even for small datasets. `std::vector` with simple structs is usually fine; it is the heavy containers (`map`, `set`, `unordered_map`) that should be avoided for small datasets unless profiling shows otherwise.
5. **Avoid `std::deque`:** It allocates in 512-byte blocks regardless of element size, guaranteeing at least 512 bytes of RAM usage immediately. This is a major source of crashes on memory-constrained devices.
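When FIFO semantics are genuinely needed, a small fixed-capacity ring buffer avoids the block allocation entirely; a minimal sketch (not an existing ESPHome helper):
```cpp
#include <array>
#include <cstddef>

template<typename T, size_t N> class RingBuffer {
 public:
  bool push(const T &value) {
    if (this->count_ == N)
      return false;  // Full; caller decides whether to drop or retry
    this->data_[(this->head_ + this->count_) % N] = value;
    this->count_++;
    return true;
  }
  bool pop(T &out) {
    if (this->count_ == 0)
      return false;
    out = this->data_[this->head_];
    this->head_ = (this->head_ + 1) % N;
    this->count_--;
    return true;
  }
  size_t size() const { return this->count_; }

 private:
  std::array<T, N> data_{};
  // head_ and count_ must stay in sync (invariant coupling, see above)
  size_t head_{0};
  size_t count_{0};
};
```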
6. **Detection:** Look for these patterns in compiler output:
- Large code sections with STL symbols (vector, map, set)
- `alloc`, `realloc`, `dealloc` in symbol names
- `_M_realloc_insert`, `_M_default_append` (vector reallocation)
- Red-black tree code (`rb_tree`, `_Rb_tree`)
- Hash table infrastructure (`unordered_map`, `hash`)
**Prioritize optimization effort for:**
- Core components (API, network, logger)
- Widely-used components (mdns, wifi, ble)
- Components causing flash size complaints
Note: Avoiding heap allocation after `setup()` is always required regardless of component type. The prioritization above is about the effort spent on container optimization (e.g., migrating from `std::vector` to `StaticVector`).
* **State Management:** Use `CORE.data` for component state that needs to persist during configuration generation. Avoid module-level mutable globals.
**Bad Pattern (Module-Level Globals):**
```python
# Don't do this - state persists between compilation runs
_component_state = []
_use_feature = None
def enable_feature():
global _use_feature
_use_feature = True
```
**Bad Pattern (Flat Keys):**
```python
# Don't do this - keys should be namespaced under component domain
MY_FEATURE_KEY = "my_component_feature"
CORE.data[MY_FEATURE_KEY] = True
```
**Good Pattern (dataclass):**
```python
from dataclasses import dataclass, field
from esphome.core import CORE
DOMAIN = "my_component"
@dataclass
class MyComponentData:
feature_enabled: bool = False
item_count: int = 0
items: list[str] = field(default_factory=list)
def _get_data() -> MyComponentData:
if DOMAIN not in CORE.data:
CORE.data[DOMAIN] = MyComponentData()
return CORE.data[DOMAIN]
def request_feature() -> None:
_get_data().feature_enabled = True
def add_item(item: str) -> None:
_get_data().items.append(item)
```
If you need a real-world example, search for components that use `@dataclass` with `CORE.data` in the codebase. Note: Some components may use `TypedDict` for dictionary-based storage; both patterns are acceptable depending on your needs.
**Why this matters:**
- Module-level globals persist between compilation runs if the dashboard doesn't fork/exec
- `CORE.data` automatically clears between runs
- Namespacing under `DOMAIN` prevents key collisions between components
- `@dataclass` provides type safety and cleaner attribute access
* **Security:** Be mindful of security when making changes to the API, web server, or any other network-related code. Do not hardcode secrets or keys.
@@ -453,45 +220,3 @@ This document provides essential context for AI models interacting with this pro
* **Python:** When adding a new Python dependency, add it to the appropriate `requirements*.txt` file and `pyproject.toml`.
* **C++ / PlatformIO:** When adding a new C++ dependency, add it to `platformio.ini` and use `cg.add_library`.
* **Build Flags:** Use `cg.add_build_flag(...)` to add compiler flags.
## 8. Public API and Breaking Changes
* **Public C++ API:**
* **Components**: Only documented features at [esphome.io](https://esphome.io) are public API. Undocumented `public` members are internal.
* **Core/Base Classes** (`esphome/core/`, `Component`, `Sensor`, etc.): All `public` members are public API.
* **Components with Global Accessors** (`global_api_server`, etc.): All `public` members are public API (except config setters).
* **Public Python API:**
* All documented configuration options at [esphome.io](https://esphome.io) are public API.
* Python code in `esphome/core/` actively used by existing core components is considered stable API.
* Other Python code is internal unless explicitly documented for external component use.
* **Breaking Changes Policy:**
* Aim for **6-month deprecation window** when possible
* Clean breaks allowed for: signature changes, deep refactorings, resource constraints
* Must document migration path in PR description (generates release notes)
* Blog post required for core/base class changes or significant architectural changes
* Full details: https://developers.esphome.io/contributing/code/#public-api-and-breaking-changes
* **Breaking Change Checklist:**
- [ ] Clear justification (RAM/flash savings, architectural improvement)
- [ ] Explored non-breaking alternatives
- [ ] Added deprecation warnings if possible (use `ESPDEPRECATED` macro for C++)
- [ ] Documented migration path in PR description with before/after examples
- [ ] Updated all internal usage and esphome-docs
- [ ] Tested backward compatibility during deprecation period
* **Deprecation Pattern (C++):**
```cpp
// Remove before 2026.6.0
ESPDEPRECATED("Use new_method() instead. Removed in 2026.6.0", "2025.12.0")
void old_method() { this->new_method(); }
```
* **Deprecation Pattern (Python):**
```python
# Remove before 2026.6.0
if CONF_OLD_KEY in config:
_LOGGER.warning(f"'{CONF_OLD_KEY}' deprecated, use '{CONF_NEW_KEY}'. Removed in 2026.6.0")
config[CONF_NEW_KEY] = config.pop(CONF_OLD_KEY) # Auto-migrate
```

View File

@@ -1 +1 @@
d272a88e8ca28ae9340a9a03295a566432a52cb696501908f57764475bf7ca65
f84518ea4140c194b21cc516aae05aaa0cf876ab866f89e22e91842df46333ed

View File

@@ -1,96 +0,0 @@
---
name: pr-workflow
description: Create pull requests for esphome. Use when creating PRs, submitting changes, or preparing contributions.
allowed-tools: Read, Bash, Glob, Grep
---
# ESPHome PR Workflow
When creating a pull request for esphome, follow these steps:
## 1. Create Branch from Upstream
Always base your branch on **upstream** (not origin/fork) to ensure you have the latest code:
```bash
git fetch upstream
git checkout -b <branch-name> upstream/dev
```
## 2. Read the PR Template
Before creating a PR, read `.github/PULL_REQUEST_TEMPLATE.md` to understand required fields.
## 3. Create the PR
Use `gh pr create` with the **full template** filled in. Never skip or abbreviate sections.
Required fields:
- **What does this implement/fix?**: Brief description of changes
- **Types of changes**: Check ONE appropriate box (Bugfix, New feature, Breaking change, etc.)
- **Related issue**: Use `fixes <link>` syntax if applicable
- **Pull request in esphome-docs**: Link if docs are needed
- **Test Environment**: Check platforms you tested on
- **Example config.yaml**: Include working example YAML
- **Checklist**: Verify code is tested and tests added
## 4. Example PR Body
```markdown
# What does this implement/fix?
<describe your changes here>
## Types of changes
- [ ] Bugfix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Developer breaking change (an API change that could break external components)
- [ ] Code quality improvements to existing code or addition of tests
- [ ] Other
**Related issue or feature (if applicable):**
- fixes https://github.com/esphome/esphome/issues/XXX
**Pull request in [esphome-docs](https://github.com/esphome/esphome-docs) with documentation (if applicable):**
- esphome/esphome-docs#XXX
## Test Environment
- [x] ESP32
- [x] ESP32 IDF
- [ ] ESP8266
- [ ] RP2040
- [ ] BK72xx
- [ ] RTL87xx
- [ ] LN882x
- [ ] nRF52840
## Example entry for `config.yaml`:
```yaml
# Example config.yaml
component_name:
id: my_component
option: value
```
## Checklist:
- [x] The code change is tested and works locally.
- [x] Tests have been added to verify that the new code works (under `tests/` folder).
If user exposed functionality or configuration variables are added/changed:
- [ ] Documentation added/updated in [esphome-docs](https://github.com/esphome/esphome-docs).
```
## 5. Push and Create PR
```bash
git push -u origin <branch-name>
gh pr create --repo esphome/esphome --base dev --title "[component] Brief description"
```
Title should be prefixed with the component name in brackets, e.g. `[safe_mode] Add feature`.

View File

@@ -1,5 +1,4 @@
[run]
omit =
esphome/components/*
esphome/analyze_memory/*
tests/integration/*

View File

@@ -7,7 +7,6 @@
- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Developer breaking change (an API change that could break external components)
- [ ] Code quality improvements to existing code or addition of tests
- [ ] Other
@@ -27,7 +26,6 @@
- [ ] RP2040
- [ ] BK72xx
- [ ] RTL87xx
- [ ] LN882x
- [ ] nRF52840
## Example entry for `config.yaml`:

View File

@@ -47,7 +47,7 @@ runs:
- name: Build and push to ghcr by digest
id: build-ghcr
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
uses: docker/build-push-action@v6.18.0
env:
DOCKER_BUILD_SUMMARY: false
DOCKER_BUILD_RECORD_UPLOAD: false
@@ -73,7 +73,7 @@ runs:
- name: Build and push to dockerhub by digest
id: build-dockerhub
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
uses: docker/build-push-action@v6.18.0
env:
DOCKER_BUILD_SUMMARY: false
DOCKER_BUILD_RECORD_UPLOAD: false

View File

@@ -17,12 +17,12 @@ runs:
steps:
- name: Set up Python ${{ inputs.python-version }}
id: python
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@v5.6.0
with:
python-version: ${{ inputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache/restore@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
uses: actions/cache/restore@v4.2.3
with:
path: venv
# yamllint disable-line rule:line-length

View File

@@ -22,17 +22,17 @@ jobs:
if: github.event.action != 'labeled' || github.event.sender.type != 'Bot'
steps:
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@v4.2.2
- name: Generate a token
id: generate-token
uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2
uses: actions/create-github-app-token@v2
with:
app-id: ${{ secrets.ESPHOME_GITHUB_APP_ID }}
private-key: ${{ secrets.ESPHOME_GITHUB_APP_PRIVATE_KEY }}
- name: Auto Label PR
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v7.0.1
with:
github-token: ${{ steps.generate-token.outputs.token }}
script: |
@@ -53,7 +53,6 @@ jobs:
'new-target-platform',
'merging-to-release',
'merging-to-beta',
'chained-pr',
'core',
'small-pr',
'dashboard',
@@ -64,12 +63,7 @@ jobs:
'needs-docs',
'needs-codeowners',
'too-big',
'labeller-recheck',
'bugfix',
'new-feature',
'breaking-change',
'developer-breaking-change',
'code-quality'
'labeller-recheck'
];
const DOCS_PR_PATTERNS = [
@@ -107,9 +101,7 @@ jobs:
// Calculate data from PR files
const changedFiles = prFiles.map(file => file.filename);
const totalAdditions = prFiles.reduce((sum, file) => sum + (file.additions || 0), 0);
const totalDeletions = prFiles.reduce((sum, file) => sum + (file.deletions || 0), 0);
const totalChanges = totalAdditions + totalDeletions;
const totalChanges = prFiles.reduce((sum, file) => sum + (file.additions || 0) + (file.deletions || 0), 0);
console.log('Current labels:', currentLabels.join(', '));
console.log('Changed files:', changedFiles.length);
@@ -142,8 +134,6 @@ jobs:
labels.add('merging-to-release');
} else if (baseRef === 'beta') {
labels.add('merging-to-beta');
} else if (baseRef !== 'dev') {
labels.add('chained-pr');
}
return labels;
@@ -237,21 +227,16 @@ jobs:
// Strategy: PR size detection
async function detectPRSize() {
const labels = new Set();
const testChanges = prFiles
.filter(file => file.filename.startsWith('tests/'))
.reduce((sum, file) => sum + (file.additions || 0) + (file.deletions || 0), 0);
const nonTestChanges = totalChanges - testChanges;
if (totalChanges <= SMALL_PR_THRESHOLD) {
labels.add('small-pr');
return labels;
}
const testAdditions = prFiles
.filter(file => file.filename.startsWith('tests/'))
.reduce((sum, file) => sum + (file.additions || 0), 0);
const testDeletions = prFiles
.filter(file => file.filename.startsWith('tests/'))
.reduce((sum, file) => sum + (file.deletions || 0), 0);
const nonTestChanges = (totalAdditions - testAdditions) - (totalDeletions - testDeletions);
// Don't add too-big if mega-pr label is already present
if (nonTestChanges > TOO_BIG_THRESHOLD && !isMegaPR) {
labels.add('too-big');
@@ -356,43 +341,17 @@ jobs:
return labels;
}
// Strategy: PR Template Checkbox detection
async function detectPRTemplateCheckboxes() {
const labels = new Set();
const prBody = context.payload.pull_request.body || '';
console.log('Checking PR template checkboxes...');
// Check for checked checkboxes in the "Types of changes" section
const checkboxPatterns = [
{ pattern: /- \[x\] Bugfix \(non-breaking change which fixes an issue\)/i, label: 'bugfix' },
{ pattern: /- \[x\] New feature \(non-breaking change which adds functionality\)/i, label: 'new-feature' },
{ pattern: /- \[x\] Breaking change \(fix or feature that would cause existing functionality to not work as expected\)/i, label: 'breaking-change' },
{ pattern: /- \[x\] Developer breaking change \(an API change that could break external components\)/i, label: 'developer-breaking-change' },
{ pattern: /- \[x\] Code quality improvements to existing code or addition of tests/i, label: 'code-quality' }
];
for (const { pattern, label } of checkboxPatterns) {
if (pattern.test(prBody)) {
console.log(`Found checked checkbox for: ${label}`);
labels.add(label);
}
}
return labels;
}
// Strategy: Requirements detection
async function detectRequirements(allLabels) {
const labels = new Set();
// Check for missing tests
if ((allLabels.has('new-component') || allLabels.has('new-platform') || allLabels.has('new-feature')) && !allLabels.has('has-tests')) {
if ((allLabels.has('new-component') || allLabels.has('new-platform')) && !allLabels.has('has-tests')) {
labels.add('needs-tests');
}
// Check for missing docs
if (allLabels.has('new-component') || allLabels.has('new-platform') || allLabels.has('new-feature')) {
if (allLabels.has('new-component') || allLabels.has('new-platform')) {
const prBody = context.payload.pull_request.body || '';
const hasDocsLink = DOCS_PR_PATTERNS.some(pattern => pattern.test(prBody));
@@ -418,29 +377,26 @@ jobs:
}
// Generate review messages
function generateReviewMessages(finalLabels, originalLabelCount) {
function generateReviewMessages(finalLabels) {
const messages = [];
const prAuthor = context.payload.pull_request.user.login;
// Too big message
if (finalLabels.includes('too-big')) {
const testAdditions = prFiles
const testChanges = prFiles
.filter(file => file.filename.startsWith('tests/'))
.reduce((sum, file) => sum + (file.additions || 0), 0);
const testDeletions = prFiles
.filter(file => file.filename.startsWith('tests/'))
.reduce((sum, file) => sum + (file.deletions || 0), 0);
const nonTestChanges = (totalAdditions - testAdditions) - (totalDeletions - testDeletions);
.reduce((sum, file) => sum + (file.additions || 0) + (file.deletions || 0), 0);
const nonTestChanges = totalChanges - testChanges;
const tooManyLabels = originalLabelCount > MAX_LABELS;
const tooManyLabels = finalLabels.length > MAX_LABELS;
const tooManyChanges = nonTestChanges > TOO_BIG_THRESHOLD;
let message = `${TOO_BIG_MARKER}\n### 📦 Pull Request Size\n\n`;
if (tooManyLabels && tooManyChanges) {
message += `This PR is too large with ${nonTestChanges} line changes (excluding tests) and affects ${originalLabelCount} different components/areas.`;
message += `This PR is too large with ${nonTestChanges} line changes (excluding tests) and affects ${finalLabels.length} different components/areas.`;
} else if (tooManyLabels) {
message += `This PR affects ${originalLabelCount} different components/areas.`;
message += `This PR affects ${finalLabels.length} different components/areas.`;
} else {
message += `This PR is too large with ${nonTestChanges} line changes (excluding tests).`;
}
@@ -468,8 +424,8 @@ jobs:
}
// Handle reviews
async function handleReviews(finalLabels, originalLabelCount) {
const reviewMessages = generateReviewMessages(finalLabels, originalLabelCount);
async function handleReviews(finalLabels) {
const reviewMessages = generateReviewMessages(finalLabels);
const hasReviewableLabels = finalLabels.some(label =>
['too-big', 'needs-codeowners'].includes(label)
);
@@ -533,8 +489,8 @@ jobs:
const apiData = await fetchApiData();
const baseRef = context.payload.pull_request.base.ref;
// Early exit for release and beta branches only
if (baseRef === 'release' || baseRef === 'beta') {
// Early exit for non-dev branches
if (baseRef !== 'dev') {
const branchLabels = await detectMergeBranch();
const finalLabels = Array.from(branchLabels);
@@ -579,8 +535,7 @@ jobs:
dashboardLabels,
actionsLabels,
codeOwnerLabels,
testLabels,
checkboxLabels
testLabels
] = await Promise.all([
detectMergeBranch(),
detectComponentPlatforms(apiData),
@@ -591,8 +546,7 @@ jobs:
detectDashboardChanges(),
detectGitHubActionsChanges(),
detectCodeOwner(),
detectTests(),
detectPRTemplateCheckboxes()
detectTests()
]);
// Combine all labels
@@ -606,8 +560,7 @@ jobs:
...dashboardLabels,
...actionsLabels,
...codeOwnerLabels,
...testLabels,
...checkboxLabels
...testLabels
]);
// Detect requirements based on all other labels
@@ -629,7 +582,6 @@ jobs:
// Handle too many labels (only for non-mega PRs)
const tooManyLabels = finalLabels.length > MAX_LABELS;
const originalLabelCount = finalLabels.length;
if (tooManyLabels && !isMegaPR && !finalLabels.includes('too-big')) {
finalLabels = ['too-big'];
@@ -638,7 +590,7 @@ jobs:
console.log('Computed labels:', finalLabels.join(', '));
// Handle reviews
await handleReviews(finalLabels, originalLabelCount);
await handleReviews(finalLabels);
// Apply labels
if (finalLabels.length > 0) {

View File

@@ -21,9 +21,9 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@v4.2.2
- name: Set up Python
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@v5.6.0
with:
python-version: "3.11"
@@ -47,7 +47,7 @@ jobs:
fi
- if: failure()
name: Review PR
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v7.0.1
with:
script: |
await github.rest.pulls.createReview({
@@ -62,7 +62,7 @@ jobs:
run: git diff
- if: failure()
name: Archive artifacts
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v4.6.2
with:
name: generated-proto-files
path: |
@@ -70,7 +70,7 @@ jobs:
esphome/components/api/api_pb2_service.*
- if: success()
name: Dismiss review
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v7.0.1
with:
script: |
let reviews = await github.rest.pulls.listReviews({

View File

@@ -6,7 +6,6 @@ on:
- ".clang-tidy"
- "platformio.ini"
- "requirements_dev.txt"
- "sdkconfig.defaults"
- ".clang-tidy.hash"
- "script/clang_tidy_hash.py"
- ".github/workflows/ci-clang-tidy-hash.yml"
@@ -21,10 +20,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@v4.2.2
- name: Set up Python
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@v5.6.0
with:
python-version: "3.11"
@@ -42,7 +41,7 @@ jobs:
- if: failure()
name: Request changes
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v7.0.1
with:
script: |
await github.rest.pulls.createReview({
@@ -55,7 +54,7 @@ jobs:
- if: success()
name: Dismiss review
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v7.0.1
with:
script: |
let reviews = await github.rest.pulls.listReviews({

View File

@@ -43,13 +43,13 @@ jobs:
- "docker"
# - "lint"
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/checkout@v4.2.2
- name: Set up Python
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@v5.6.0
with:
python-version: "3.11"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
uses: docker/setup-buildx-action@v3.11.1
- name: Set TAG
run: |

View File

@@ -1,111 +0,0 @@
---
name: Memory Impact Comment (Forks)
on:
workflow_run:
workflows: ["CI"]
types: [completed]
permissions:
contents: read
pull-requests: write
actions: read
jobs:
memory-impact-comment:
name: Post memory impact comment (fork PRs only)
runs-on: ubuntu-24.04
# Only run for PRs from forks that had successful CI runs
if: >
github.event.workflow_run.event == 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.head_repository.full_name != github.repository
env:
GH_TOKEN: ${{ github.token }}
steps:
- name: Get PR details
id: pr
run: |
# Get PR details by searching for PR with matching head SHA
# The workflow_run.pull_requests field is often empty for forks
# Use paginate to handle repos with many open PRs
head_sha="${{ github.event.workflow_run.head_sha }}"
pr_data=$(gh api --paginate "/repos/${{ github.repository }}/pulls" \
--jq ".[] | select(.head.sha == \"$head_sha\") | {number: .number, base_ref: .base.ref}" \
| head -n 1)
if [ -z "$pr_data" ]; then
echo "No PR found for SHA $head_sha, skipping"
echo "skip=true" >> "$GITHUB_OUTPUT"
exit 0
fi
pr_number=$(echo "$pr_data" | jq -r '.number')
base_ref=$(echo "$pr_data" | jq -r '.base_ref')
echo "pr_number=$pr_number" >> "$GITHUB_OUTPUT"
echo "base_ref=$base_ref" >> "$GITHUB_OUTPUT"
echo "Found PR #$pr_number targeting base branch: $base_ref"
- name: Check out code from base repository
if: steps.pr.outputs.skip != 'true'
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# Always check out from the base repository (esphome/esphome), never from forks
# Use the PR's target branch to ensure we run trusted code from the main repo
repository: ${{ github.repository }}
ref: ${{ steps.pr.outputs.base_ref }}
- name: Restore Python
if: steps.pr.outputs.skip != 'true'
uses: ./.github/actions/restore-python
with:
python-version: "3.11"
cache-key: ${{ hashFiles('.cache-key') }}
- name: Download memory analysis artifacts
if: steps.pr.outputs.skip != 'true'
run: |
run_id="${{ github.event.workflow_run.id }}"
echo "Downloading artifacts from workflow run $run_id"
mkdir -p memory-analysis
# Download target analysis artifact
if gh run download --name "memory-analysis-target" --dir memory-analysis --repo "${{ github.repository }}" "$run_id"; then
echo "Downloaded memory-analysis-target artifact."
else
echo "No memory-analysis-target artifact found."
fi
# Download PR analysis artifact
if gh run download --name "memory-analysis-pr" --dir memory-analysis --repo "${{ github.repository }}" "$run_id"; then
echo "Downloaded memory-analysis-pr artifact."
else
echo "No memory-analysis-pr artifact found."
fi
- name: Check if artifacts exist
id: check
if: steps.pr.outputs.skip != 'true'
run: |
if [ -f ./memory-analysis/memory-analysis-target.json ] && [ -f ./memory-analysis/memory-analysis-pr.json ]; then
echo "found=true" >> "$GITHUB_OUTPUT"
else
echo "found=false" >> "$GITHUB_OUTPUT"
echo "Memory analysis artifacts not found, skipping comment"
fi
- name: Post or update PR comment
if: steps.pr.outputs.skip != 'true' && steps.check.outputs.found == 'true'
env:
PR_NUMBER: ${{ steps.pr.outputs.pr_number }}
run: |
. venv/bin/activate
# Pass PR number and JSON file paths directly to Python script
# Let Python parse the JSON to avoid shell injection risks
# The script will validate and sanitize all inputs
python script/ci_memory_impact_comment.py \
--pr-number "$PR_NUMBER" \
--target-json ./memory-analysis/memory-analysis-target.json \
--pr-json ./memory-analysis/memory-analysis-pr.json

View File

@@ -36,18 +36,18 @@ jobs:
cache-key: ${{ steps.cache-key.outputs.key }}
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@v4.2.2
- name: Generate cache-key
id: cache-key
run: echo key="${{ hashFiles('requirements.txt', 'requirements_test.txt', '.pre-commit-config.yaml') }}" >> $GITHUB_OUTPUT
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
id: python
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@v5.6.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
uses: actions/cache@v4.2.3
with:
path: venv
# yamllint disable-line rule:line-length
@@ -70,7 +70,7 @@ jobs:
if: needs.determine-jobs.outputs.python-linters == 'true'
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@v4.2.2
- name: Restore Python
uses: ./.github/actions/restore-python
with:
@@ -91,7 +91,7 @@ jobs:
- common
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@v4.2.2
- name: Restore Python
uses: ./.github/actions/restore-python
with:
@@ -105,7 +105,6 @@ jobs:
script/ci-custom.py
script/build_codeowners.py --check
script/build_language_schema.py --check
script/generate-esp32-boards.py --check
pytest:
name: Run pytest
@@ -114,6 +113,7 @@ jobs:
matrix:
python-version:
- "3.11"
- "3.12"
- "3.13"
os:
- ubuntu-latest
@@ -125,14 +125,18 @@ jobs:
# version used for docker images on Windows and macOS
- python-version: "3.13"
os: windows-latest
- python-version: "3.12"
os: windows-latest
- python-version: "3.13"
os: macOS-latest
- python-version: "3.12"
os: macOS-latest
runs-on: ${{ matrix.os }}
needs:
- common
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@v4.2.2
- name: Restore Python
id: restore-python
uses: ./.github/actions/restore-python
@@ -152,12 +156,12 @@ jobs:
. venv/bin/activate
pytest -vv --cov-report=xml --tb=native -n auto tests --ignore=tests/integration/
- name: Upload coverage to Codecov
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
uses: codecov/codecov-action@v5.4.3
with:
token: ${{ secrets.CODECOV_TOKEN }}
- name: Save Python virtual environment cache
if: github.ref == 'refs/heads/dev'
uses: actions/cache/save@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
uses: actions/cache/save@v4.2.3
with:
path: venv
key: ${{ runner.os }}-${{ steps.restore-python.outputs.python-version }}-venv-${{ needs.common.outputs.cache-key }}
@@ -170,20 +174,12 @@ jobs:
outputs:
integration-tests: ${{ steps.determine.outputs.integration-tests }}
clang-tidy: ${{ steps.determine.outputs.clang-tidy }}
clang-tidy-mode: ${{ steps.determine.outputs.clang-tidy-mode }}
python-linters: ${{ steps.determine.outputs.python-linters }}
changed-components: ${{ steps.determine.outputs.changed-components }}
changed-components-with-tests: ${{ steps.determine.outputs.changed-components-with-tests }}
directly-changed-components-with-tests: ${{ steps.determine.outputs.directly-changed-components-with-tests }}
component-test-count: ${{ steps.determine.outputs.component-test-count }}
changed-cpp-file-count: ${{ steps.determine.outputs.changed-cpp-file-count }}
memory_impact: ${{ steps.determine.outputs.memory-impact }}
cpp-unit-tests-run-all: ${{ steps.determine.outputs.cpp-unit-tests-run-all }}
cpp-unit-tests-components: ${{ steps.determine.outputs.cpp-unit-tests-components }}
component-test-batches: ${{ steps.determine.outputs.component-test-batches }}
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@v4.2.2
with:
# Fetch enough history to find the merge base
fetch-depth: 2
@@ -192,11 +188,6 @@ jobs:
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Restore components graph cache
uses: actions/cache/restore@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
with:
path: .temp/components_graph.json
key: components-graph-${{ hashFiles('esphome/components/**/*.py') }}
- name: Determine which tests to run
id: determine
env:
@@ -210,23 +201,9 @@ jobs:
# Extract individual fields
echo "integration-tests=$(echo "$output" | jq -r '.integration_tests')" >> $GITHUB_OUTPUT
echo "clang-tidy=$(echo "$output" | jq -r '.clang_tidy')" >> $GITHUB_OUTPUT
echo "clang-tidy-mode=$(echo "$output" | jq -r '.clang_tidy_mode')" >> $GITHUB_OUTPUT
echo "python-linters=$(echo "$output" | jq -r '.python_linters')" >> $GITHUB_OUTPUT
echo "changed-components=$(echo "$output" | jq -c '.changed_components')" >> $GITHUB_OUTPUT
echo "changed-components-with-tests=$(echo "$output" | jq -c '.changed_components_with_tests')" >> $GITHUB_OUTPUT
echo "directly-changed-components-with-tests=$(echo "$output" | jq -c '.directly_changed_components_with_tests')" >> $GITHUB_OUTPUT
echo "component-test-count=$(echo "$output" | jq -r '.component_test_count')" >> $GITHUB_OUTPUT
echo "changed-cpp-file-count=$(echo "$output" | jq -r '.changed_cpp_file_count')" >> $GITHUB_OUTPUT
echo "memory-impact=$(echo "$output" | jq -c '.memory_impact')" >> $GITHUB_OUTPUT
echo "cpp-unit-tests-run-all=$(echo "$output" | jq -r '.cpp_unit_tests_run_all')" >> $GITHUB_OUTPUT
echo "cpp-unit-tests-components=$(echo "$output" | jq -c '.cpp_unit_tests_components')" >> $GITHUB_OUTPUT
echo "component-test-batches=$(echo "$output" | jq -c '.component_test_batches')" >> $GITHUB_OUTPUT
- name: Save components graph cache
if: github.ref == 'refs/heads/dev'
uses: actions/cache/save@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
with:
path: .temp/components_graph.json
key: components-graph-${{ hashFiles('esphome/components/**/*.py') }}
integration-tests:
name: Run integration tests
@@ -237,15 +214,15 @@ jobs:
if: needs.determine-jobs.outputs.integration-tests == 'true'
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@v4.2.2
- name: Set up Python 3.13
id: python
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@v5.6.0
with:
python-version: "3.13"
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
uses: actions/cache@v4.2.3
with:
path: venv
key: ${{ runner.os }}-${{ steps.python.outputs.python-version }}-venv-${{ needs.common.outputs.cache-key }}
@@ -264,34 +241,7 @@ jobs:
. venv/bin/activate
pytest -vv --no-cov --tb=native -n auto tests/integration/
cpp-unit-tests:
name: Run C++ unit tests
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
if: github.event_name == 'pull_request' && (needs.determine-jobs.outputs.cpp-unit-tests-run-all == 'true' || needs.determine-jobs.outputs.cpp-unit-tests-components != '[]')
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Restore Python
uses: ./.github/actions/restore-python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Run cpp_unit_test.py
run: |
. venv/bin/activate
if [ "${{ needs.determine-jobs.outputs.cpp-unit-tests-run-all }}" = "true" ]; then
script/cpp_unit_test.py --all
else
ARGS=$(echo '${{ needs.determine-jobs.outputs.cpp-unit-tests-components }}' | jq -r '.[] | @sh' | xargs)
script/cpp_unit_test.py $ARGS
fi
clang-tidy-single:
clang-tidy:
name: ${{ matrix.name }}
runs-on: ubuntu-24.04
needs:
@@ -309,19 +259,35 @@ jobs:
name: Run script/clang-tidy for ESP8266
options: --environment esp8266-arduino-tidy --grep USE_ESP8266
pio_cache_key: tidyesp8266
- id: clang-tidy
name: Run script/clang-tidy for ESP32 Arduino 1/4
options: --environment esp32-arduino-tidy --split-num 4 --split-at 1
pio_cache_key: tidyesp32
- id: clang-tidy
name: Run script/clang-tidy for ESP32 Arduino 2/4
options: --environment esp32-arduino-tidy --split-num 4 --split-at 2
pio_cache_key: tidyesp32
- id: clang-tidy
name: Run script/clang-tidy for ESP32 Arduino 3/4
options: --environment esp32-arduino-tidy --split-num 4 --split-at 3
pio_cache_key: tidyesp32
- id: clang-tidy
name: Run script/clang-tidy for ESP32 Arduino 4/4
options: --environment esp32-arduino-tidy --split-num 4 --split-at 4
pio_cache_key: tidyesp32
- id: clang-tidy
name: Run script/clang-tidy for ESP32 IDF
options: --environment esp32-idf-tidy --grep USE_ESP_IDF
pio_cache_key: tidyesp32-idf
- id: clang-tidy
name: Run script/clang-tidy for ZEPHYR
options: --environment nrf52-tidy --grep USE_ZEPHYR --grep USE_NRF52
options: --environment nrf52-tidy --grep USE_ZEPHYR
pio_cache_key: tidy-zephyr
ignore_errors: false
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@v4.2.2
with:
# Need history for HEAD~1 to work for checking changed files
fetch-depth: 2
@@ -334,14 +300,14 @@ jobs:
- name: Cache platformio
if: github.ref == 'refs/heads/dev'
uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
uses: actions/cache@v4.2.3
with:
path: ~/.platformio
key: platformio-${{ matrix.pio_cache_key }}-${{ hashFiles('platformio.ini') }}
- name: Cache platformio
if: github.ref != 'refs/heads/dev'
uses: actions/cache/restore@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
uses: actions/cache/restore@v4.2.3
with:
path: ~/.platformio
key: platformio-${{ matrix.pio_cache_key }}-${{ hashFiles('platformio.ini') }}
@@ -389,556 +355,122 @@ jobs:
# yamllint disable-line rule:line-length
if: always()
clang-tidy-nosplit:
name: Run script/clang-tidy for ESP32 Arduino
test-build-components:
name: Component test ${{ matrix.file }}
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
if: needs.determine-jobs.outputs.clang-tidy-mode == 'nosplit'
env:
GH_TOKEN: ${{ github.token }}
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# Need history for HEAD~1 to work for checking changed files
fetch-depth: 2
- name: Restore Python
uses: ./.github/actions/restore-python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Cache platformio
if: github.ref == 'refs/heads/dev'
uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
with:
path: ~/.platformio
key: platformio-tidyesp32-${{ hashFiles('platformio.ini') }}
- name: Cache platformio
if: github.ref != 'refs/heads/dev'
uses: actions/cache/restore@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
with:
path: ~/.platformio
key: platformio-tidyesp32-${{ hashFiles('platformio.ini') }}
- name: Register problem matchers
run: |
echo "::add-matcher::.github/workflows/matchers/gcc.json"
echo "::add-matcher::.github/workflows/matchers/clang-tidy.json"
- name: Check if full clang-tidy scan needed
id: check_full_scan
run: |
. venv/bin/activate
if python script/clang_tidy_hash.py --check; then
echo "full_scan=true" >> $GITHUB_OUTPUT
echo "reason=hash_changed" >> $GITHUB_OUTPUT
else
echo "full_scan=false" >> $GITHUB_OUTPUT
echo "reason=normal" >> $GITHUB_OUTPUT
fi
- name: Run clang-tidy
run: |
. venv/bin/activate
if [ "${{ steps.check_full_scan.outputs.full_scan }}" = "true" ]; then
echo "Running FULL clang-tidy scan (hash changed)"
script/clang-tidy --all-headers --fix --environment esp32-arduino-tidy
else
echo "Running clang-tidy on changed files only"
script/clang-tidy --all-headers --fix --changed --environment esp32-arduino-tidy
fi
env:
# Also cache libdeps, store them in a ~/.platformio subfolder
PLATFORMIO_LIBDEPS_DIR: ~/.platformio/libdeps
- name: Suggested changes
run: script/ci-suggest-changes
if: always()
clang-tidy-split:
name: ${{ matrix.name }}
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
if: needs.determine-jobs.outputs.clang-tidy-mode == 'split'
env:
GH_TOKEN: ${{ github.token }}
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.component-test-count) > 0 && fromJSON(needs.determine-jobs.outputs.component-test-count) < 100
strategy:
fail-fast: false
max-parallel: 2
matrix:
include:
- id: clang-tidy
name: Run script/clang-tidy for ESP32 Arduino 1/4
options: --environment esp32-arduino-tidy --split-num 4 --split-at 1
- id: clang-tidy
name: Run script/clang-tidy for ESP32 Arduino 2/4
options: --environment esp32-arduino-tidy --split-num 4 --split-at 2
- id: clang-tidy
name: Run script/clang-tidy for ESP32 Arduino 3/4
options: --environment esp32-arduino-tidy --split-num 4 --split-at 3
- id: clang-tidy
name: Run script/clang-tidy for ESP32 Arduino 4/4
options: --environment esp32-arduino-tidy --split-num 4 --split-at 4
file: ${{ fromJson(needs.determine-jobs.outputs.changed-components) }}
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# Need history for HEAD~1 to work for checking changed files
fetch-depth: 2
- name: Install dependencies
run: |
sudo apt-get update
sudo apt-get install libsdl2-dev
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Restore Python
uses: ./.github/actions/restore-python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Cache platformio
if: github.ref == 'refs/heads/dev'
uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
with:
path: ~/.platformio
key: platformio-tidyesp32-${{ hashFiles('platformio.ini') }}
- name: Cache platformio
if: github.ref != 'refs/heads/dev'
uses: actions/cache/restore@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
with:
path: ~/.platformio
key: platformio-tidyesp32-${{ hashFiles('platformio.ini') }}
- name: Register problem matchers
run: |
echo "::add-matcher::.github/workflows/matchers/gcc.json"
echo "::add-matcher::.github/workflows/matchers/clang-tidy.json"
- name: Check if full clang-tidy scan needed
id: check_full_scan
- name: test_build_components -e config -c ${{ matrix.file }}
run: |
. venv/bin/activate
if python script/clang_tidy_hash.py --check; then
echo "full_scan=true" >> $GITHUB_OUTPUT
echo "reason=hash_changed" >> $GITHUB_OUTPUT
else
echo "full_scan=false" >> $GITHUB_OUTPUT
echo "reason=normal" >> $GITHUB_OUTPUT
fi
- name: Run clang-tidy
./script/test_build_components -e config -c ${{ matrix.file }}
- name: test_build_components -e compile -c ${{ matrix.file }}
run: |
. venv/bin/activate
if [ "${{ steps.check_full_scan.outputs.full_scan }}" = "true" ]; then
echo "Running FULL clang-tidy scan (hash changed)"
script/clang-tidy --all-headers --fix ${{ matrix.options }}
else
echo "Running clang-tidy on changed files only"
script/clang-tidy --all-headers --fix --changed ${{ matrix.options }}
fi
env:
# Also cache libdeps, store them in a ~/.platformio subfolder
PLATFORMIO_LIBDEPS_DIR: ~/.platformio/libdeps
./script/test_build_components -e compile -c ${{ matrix.file }}
- name: Suggested changes
run: script/ci-suggest-changes
if: always()
test-build-components-split:
name: Test components batch (${{ matrix.components }})
test-build-components-splitter:
name: Split components for testing into 20 groups maximum
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.component-test-count) > 0
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.component-test-count) >= 100
outputs:
matrix: ${{ steps.split.outputs.components }}
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Split components into 20 groups
id: split
run: |
components=$(echo '${{ needs.determine-jobs.outputs.changed-components }}' | jq -c '.[]' | shuf | jq -s -c '[_nwise(20) | join(" ")]')
echo "components=$components" >> $GITHUB_OUTPUT
test-build-components-split:
name: Test split components
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
- test-build-components-splitter
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.component-test-count) >= 100
strategy:
fail-fast: false
max-parallel: ${{ (startsWith(github.base_ref, 'beta') || startsWith(github.base_ref, 'release')) && 8 || 4 }}
max-parallel: 4
matrix:
components: ${{ fromJson(needs.determine-jobs.outputs.component-test-batches) }}
components: ${{ fromJson(needs.test-build-components-splitter.outputs.matrix) }}
steps:
- name: Show disk space
run: |
echo "Available disk space:"
df -h
- name: List components
run: echo ${{ matrix.components }}
- name: Cache apt packages
uses: awalsh128/cache-apt-pkgs-action@acb598e5ddbc6f68a970c5da0688d2f3a9f04d05 # v1.5.3
with:
packages: libsdl2-dev
version: 1.0
- name: Install dependencies
run: |
sudo apt-get update
sudo apt-get install libsdl2-dev
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@v4.2.2
- name: Restore Python
uses: ./.github/actions/restore-python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Validate and compile components with intelligent grouping
- name: Validate config
run: |
. venv/bin/activate
# Check if /mnt has more free space than / before bind mounting
# Extract available space in KB for comparison
root_avail=$(df -k / | awk 'NR==2 {print $4}')
mnt_avail=$(df -k /mnt 2>/dev/null | awk 'NR==2 {print $4}')
echo "Available space: / has ${root_avail}KB, /mnt has ${mnt_avail}KB"
# Only use /mnt if it has more space than /
if [ -n "$mnt_avail" ] && [ "$mnt_avail" -gt "$root_avail" ]; then
echo "Using /mnt for build files (more space available)"
# Bind mount PlatformIO directory to /mnt (tools, packages, build cache all go there)
sudo mkdir -p /mnt/platformio
sudo chown $USER:$USER /mnt/platformio
mkdir -p ~/.platformio
sudo mount --bind /mnt/platformio ~/.platformio
# Bind mount test build directory to /mnt
sudo mkdir -p /mnt/test_build_components_build
sudo chown $USER:$USER /mnt/test_build_components_build
mkdir -p tests/test_build_components/build
sudo mount --bind /mnt/test_build_components_build tests/test_build_components/build
else
echo "Using / for build files (more space available than /mnt or /mnt unavailable)"
fi
# Convert space-separated components to comma-separated for Python script
components_csv=$(echo "${{ matrix.components }}" | tr ' ' ',')
# Only isolate directly changed components when targeting dev branch
# For beta/release branches, group everything for faster CI
#
# WHY ISOLATE DIRECTLY CHANGED COMPONENTS?
# - Isolated tests run WITHOUT --testing-mode, enabling full validation
# - This catches pin conflicts and other issues in directly changed code
# - Grouped tests use --testing-mode to allow config merging (disables some checks)
# - Dependencies are safe to group since they weren't modified in this PR
if [[ "${{ github.base_ref }}" == beta* ]] || [[ "${{ github.base_ref }}" == release* ]]; then
directly_changed_csv=""
echo "Testing components: $components_csv"
echo "Target branch: ${{ github.base_ref }} - grouping all components"
else
directly_changed_csv=$(echo '${{ needs.determine-jobs.outputs.directly-changed-components-with-tests }}' | jq -r 'join(",")')
echo "Testing components: $components_csv"
echo "Target branch: ${{ github.base_ref }} - isolating directly changed components: $directly_changed_csv"
fi
echo ""
# Show disk space before validation (after bind mounts setup)
echo "Disk space before config validation:"
df -h
echo ""
# Run config validation with grouping and isolation
python3 script/test_build_components.py -e config -c "$components_csv" -f --isolate "$directly_changed_csv"
echo ""
echo "Config validation passed! Starting compilation..."
echo ""
# Show disk space before compilation
echo "Disk space before compilation:"
df -h
echo ""
# Run compilation with grouping and isolation
python3 script/test_build_components.py -e compile -c "$components_csv" -f --isolate "$directly_changed_csv"
for component in ${{ matrix.components }}; do
./script/test_build_components -e config -c $component
done
- name: Compile config
run: |
. venv/bin/activate
mkdir build_cache
export PLATFORMIO_BUILD_CACHE_DIR=$PWD/build_cache
for component in ${{ matrix.components }}; do
./script/test_build_components -e compile -c $component
done
pre-commit-ci-lite:
name: pre-commit.ci lite
runs-on: ubuntu-latest
needs:
- common
if: github.event_name == 'pull_request' && !startsWith(github.base_ref, 'beta') && !startsWith(github.base_ref, 'release')
if: github.event_name == 'pull_request' && github.base_ref != 'beta' && github.base_ref != 'release'
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@v4.2.2
- name: Restore Python
uses: ./.github/actions/restore-python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- uses: esphome/pre-commit-action@43cd1109c09c544d97196f7730ee5b2e0cc6d81e # v3.0.1 fork with pinned actions/cache
- uses: pre-commit/action@v3.0.1
env:
SKIP: pylint,clang-tidy-hash
- uses: pre-commit-ci/lite-action@5d6cc0eb514c891a40562a58a8e71576c5c7fb43 # v1.1.0
- uses: pre-commit-ci/lite-action@v1.1.0
if: always()
memory-impact-target-branch:
name: Build target branch for memory impact
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.memory_impact).should_run == 'true'
outputs:
ram_usage: ${{ steps.extract.outputs.ram_usage }}
flash_usage: ${{ steps.extract.outputs.flash_usage }}
cache_hit: ${{ steps.cache-memory-analysis.outputs.cache-hit }}
skip: ${{ steps.check-script.outputs.skip }}
steps:
- name: Check out target branch
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
ref: ${{ github.base_ref }}
# Check if memory impact extraction script exists on target branch
# If not, skip the analysis (this handles older branches that don't have the feature)
- name: Check for memory impact script
id: check-script
run: |
if [ -f "script/ci_memory_impact_extract.py" ]; then
echo "skip=false" >> $GITHUB_OUTPUT
else
echo "skip=true" >> $GITHUB_OUTPUT
echo "::warning::ci_memory_impact_extract.py not found on target branch, skipping memory impact analysis"
fi
# All remaining steps only run if script exists
- name: Generate cache key
id: cache-key
if: steps.check-script.outputs.skip != 'true'
run: |
# Get the commit SHA of the target branch
target_sha=$(git rev-parse HEAD)
# Hash the build infrastructure files (all files that affect build/analysis)
infra_hash=$(cat \
script/test_build_components.py \
script/ci_memory_impact_extract.py \
script/analyze_component_buses.py \
script/merge_component_configs.py \
script/ci_helpers.py \
.github/workflows/ci.yml \
| sha256sum | cut -d' ' -f1)
# Get platform and components from job inputs
platform="${{ fromJSON(needs.determine-jobs.outputs.memory_impact).platform }}"
components='${{ toJSON(fromJSON(needs.determine-jobs.outputs.memory_impact).components) }}'
components_hash=$(echo "$components" | sha256sum | cut -d' ' -f1)
# Combine into cache key
cache_key="memory-analysis-target-${target_sha}-${infra_hash}-${platform}-${components_hash}"
echo "cache-key=${cache_key}" >> $GITHUB_OUTPUT
echo "Cache key: ${cache_key}"
- name: Restore cached memory analysis
id: cache-memory-analysis
if: steps.check-script.outputs.skip != 'true'
uses: actions/cache/restore@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
with:
path: memory-analysis-target.json
key: ${{ steps.cache-key.outputs.cache-key }}
- name: Cache status
if: steps.check-script.outputs.skip != 'true'
run: |
if [ "${{ steps.cache-memory-analysis.outputs.cache-hit }}" == "true" ]; then
echo "✓ Cache hit! Using cached memory analysis results."
echo " Skipping build step to save time."
else
echo "✗ Cache miss. Will build and analyze memory usage."
fi
- name: Restore Python
if: steps.check-script.outputs.skip != 'true' && steps.cache-memory-analysis.outputs.cache-hit != 'true'
uses: ./.github/actions/restore-python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Cache platformio
if: steps.check-script.outputs.skip != 'true' && steps.cache-memory-analysis.outputs.cache-hit != 'true'
uses: actions/cache/restore@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
with:
path: ~/.platformio
key: platformio-memory-${{ fromJSON(needs.determine-jobs.outputs.memory_impact).platform }}-${{ hashFiles('platformio.ini') }}
- name: Build, compile, and analyze memory
if: steps.check-script.outputs.skip != 'true' && steps.cache-memory-analysis.outputs.cache-hit != 'true'
id: build
run: |
. venv/bin/activate
components='${{ toJSON(fromJSON(needs.determine-jobs.outputs.memory_impact).components) }}'
platform="${{ fromJSON(needs.determine-jobs.outputs.memory_impact).platform }}"
echo "Building with test_build_components.py for $platform with components:"
echo "$components" | jq -r '.[]' | sed 's/^/ - /'
# Use test_build_components.py which handles grouping automatically
# Pass components as comma-separated list
component_list=$(echo "$components" | jq -r 'join(",")')
echo "Compiling with test_build_components.py..."
# Run build and extract memory with auto-detection of build directory for detailed analysis
# Use tee to show output in CI while also piping to extraction script
python script/test_build_components.py \
-e compile \
-c "$component_list" \
-t "$platform" 2>&1 | \
tee /dev/stderr | \
python script/ci_memory_impact_extract.py \
--output-env \
--output-json memory-analysis-target.json
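      # Hedged note: `tee /dev/stderr` duplicates the compile output -- one copy
      # reaches the job log via stderr, the other continues down the pipe to the
      # extraction script -- so build progress stays visible in CI even though
      # stdout is consumed by the pipeline.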
# Add metadata to JSON before caching
python script/ci_add_metadata_to_json.py \
--json-file memory-analysis-target.json \
--components "$components" \
--platform "$platform"
- name: Save memory analysis to cache
if: steps.check-script.outputs.skip != 'true' && steps.cache-memory-analysis.outputs.cache-hit != 'true' && steps.build.outcome == 'success'
uses: actions/cache/save@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
with:
path: memory-analysis-target.json
key: ${{ steps.cache-key.outputs.cache-key }}
- name: Extract memory usage for outputs
id: extract
if: steps.check-script.outputs.skip != 'true'
run: |
if [ -f memory-analysis-target.json ]; then
ram=$(jq -r '.ram_bytes' memory-analysis-target.json)
flash=$(jq -r '.flash_bytes' memory-analysis-target.json)
echo "ram_usage=${ram}" >> $GITHUB_OUTPUT
echo "flash_usage=${flash}" >> $GITHUB_OUTPUT
echo "RAM: ${ram} bytes, Flash: ${flash} bytes"
else
echo "Error: memory-analysis-target.json not found"
exit 1
fi
- name: Upload memory analysis JSON
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: memory-analysis-target
path: memory-analysis-target.json
if-no-files-found: warn
retention-days: 1
memory-impact-pr-branch:
name: Build PR branch for memory impact
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
if: github.event_name == 'pull_request' && fromJSON(needs.determine-jobs.outputs.memory_impact).should_run == 'true'
outputs:
ram_usage: ${{ steps.extract.outputs.ram_usage }}
flash_usage: ${{ steps.extract.outputs.flash_usage }}
steps:
- name: Check out PR branch
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Restore Python
uses: ./.github/actions/restore-python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Cache platformio
uses: actions/cache/restore@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
with:
path: ~/.platformio
key: platformio-memory-${{ fromJSON(needs.determine-jobs.outputs.memory_impact).platform }}-${{ hashFiles('platformio.ini') }}
- name: Build, compile, and analyze memory
id: extract
run: |
. venv/bin/activate
components='${{ toJSON(fromJSON(needs.determine-jobs.outputs.memory_impact).components) }}'
platform="${{ fromJSON(needs.determine-jobs.outputs.memory_impact).platform }}"
echo "Building with test_build_components.py for $platform with components:"
echo "$components" | jq -r '.[]' | sed 's/^/ - /'
# Use test_build_components.py which handles grouping automatically
# Pass components as comma-separated list
component_list=$(echo "$components" | jq -r 'join(",")')
echo "Compiling with test_build_components.py..."
# Run build and extract memory with auto-detection of build directory for detailed analysis
# Use tee to show output in CI while also piping to extraction script
python script/test_build_components.py \
-e compile \
-c "$component_list" \
-t "$platform" 2>&1 | \
tee /dev/stderr | \
python script/ci_memory_impact_extract.py \
--output-env \
--output-json memory-analysis-pr.json
# Add metadata to JSON (components and platform are in shell variables above)
python script/ci_add_metadata_to_json.py \
--json-file memory-analysis-pr.json \
--components "$components" \
--platform "$platform"
- name: Upload memory analysis JSON
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: memory-analysis-pr
path: memory-analysis-pr.json
if-no-files-found: warn
retention-days: 1
memory-impact-comment:
name: Comment memory impact
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
- memory-impact-target-branch
- memory-impact-pr-branch
if: github.event_name == 'pull_request' && github.event.pull_request.head.repo.full_name == github.repository && fromJSON(needs.determine-jobs.outputs.memory_impact).should_run == 'true' && needs.memory-impact-target-branch.outputs.skip != 'true'
permissions:
contents: read
pull-requests: write
env:
GH_TOKEN: ${{ github.token }}
steps:
- name: Check out code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Restore Python
uses: ./.github/actions/restore-python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Download target analysis JSON
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
name: memory-analysis-target
path: ./memory-analysis
continue-on-error: true
- name: Download PR analysis JSON
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
name: memory-analysis-pr
path: ./memory-analysis
continue-on-error: true
- name: Post or update PR comment
env:
PR_NUMBER: ${{ github.event.pull_request.number }}
run: |
. venv/bin/activate
# Pass JSON file paths directly to Python script
# All data is extracted from JSON files for security
python script/ci_memory_impact_comment.py \
--pr-number "$PR_NUMBER" \
--target-json ./memory-analysis/memory-analysis-target.json \
--pr-json ./memory-analysis/memory-analysis-pr.json
ci-status:
name: CI Status
runs-on: ubuntu-24.04
@@ -948,24 +480,21 @@ jobs:
- pylint
- pytest
- integration-tests
- clang-tidy-single
- clang-tidy-nosplit
- clang-tidy-split
- clang-tidy
- determine-jobs
- test-build-components
- test-build-components-splitter
- test-build-components-split
- pre-commit-ci-lite
- memory-impact-target-branch
- memory-impact-pr-branch
- memory-impact-comment
if: always()
steps:
- name: Check job results
- name: Success
if: ${{ !(contains(needs.*.result, 'failure')) }}
run: exit 0
- name: Failure
if: ${{ contains(needs.*.result, 'failure') }}
env:
NEEDS_JSON: ${{ toJSON(needs) }}
JSON_DOC: ${{ toJSON(needs) }}
run: |
# memory-impact-target-branch is allowed to fail without blocking CI.
# This job builds the target branch (dev/beta/release) which may fail because:
# 1. The target branch has a build issue independent of this PR
# 2. This PR fixes a build issue on the target branch
# In either case, we only care that the PR branch builds successfully.
echo "$NEEDS_JSON" | jq -e 'del(.["memory-impact-target-branch"]) | all(.result != "failure")'
echo $JSON_DOC | jq
exit 1
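          # Hedged illustration: with NEEDS_JSON like
          #   {"pytest": {"result": "success"},
          #    "memory-impact-target-branch": {"result": "failure"}}
          # the filter deletes the exempted key first, `all(.result != "failure")`
          # evaluates to true over the remaining jobs, and `jq -e` exits 0, so CI
          # still passes; any other failing job leaves a "failure" result behind
          # and the step exits nonzero.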


@@ -21,11 +21,11 @@ permissions:
jobs:
request-codeowner-reviews:
name: Run
if: ${{ github.repository == 'esphome/esphome' && !github.event.pull_request.draft }}
if: ${{ !github.event.pull_request.draft }}
runs-on: ubuntu-latest
steps:
- name: Request reviews from component codeowners
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v7.0.1
with:
script: |
const owner = context.repo.owner;


@@ -54,11 +54,11 @@ jobs:
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@v4
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@cdefb33c0f6224e58673d9004f47f7cb3e328b89 # v4.31.10
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
@@ -86,6 +86,6 @@ jobs:
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@cdefb33c0f6224e58673d9004f47f7cb3e328b89 # v4.31.10
uses: github/codeql-action/analyze@v3
with:
category: "/language:${{matrix.language}}"


@@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Add external component comment
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v7.0.1
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |


@@ -19,7 +19,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Notify codeowners for component issues
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v7.0.1
with:
script: |
const owner = context.repo.owner;

.github/workflows/needs-docs.yml (new file)

@@ -0,0 +1,24 @@
name: Needs Docs
on:
pull_request:
types: [labeled, unlabeled]
jobs:
check:
name: Check
runs-on: ubuntu-latest
steps:
- name: Check for needs-docs label
uses: actions/github-script@v7.0.1
with:
script: |
const { data: labels } = await github.rest.issues.listLabelsOnIssue({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number
});
const needsDocs = labels.find(label => label.name === 'needs-docs');
if (needsDocs) {
core.setFailed('Pull request needs docs');
}


@@ -20,7 +20,7 @@ jobs:
branch_build: ${{ steps.tag.outputs.branch_build }}
deploy_env: ${{ steps.tag.outputs.deploy_env }}
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/checkout@v4.2.2
- name: Get tag
id: tag
# yamllint disable rule:line-length
@@ -60,9 +60,9 @@ jobs:
contents: read
id-token: write
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/checkout@v4.2.2
- name: Set up Python
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@v5.6.0
with:
python-version: "3.x"
- name: Build
@@ -70,7 +70,7 @@ jobs:
pip3 install build
python3 -m build
- name: Publish
uses: pypa/gh-action-pypi-publish@ed0c53931b1dc9bd32cbe73a98c7f6766f8a527e # v1.13.0
uses: pypa/gh-action-pypi-publish@v1.12.4
with:
skip-existing: true
@@ -92,22 +92,22 @@ jobs:
os: "ubuntu-24.04-arm"
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/checkout@v4.2.2
- name: Set up Python
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@v5.6.0
with:
python-version: "3.11"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
uses: docker/setup-buildx-action@v3.11.1
- name: Log in to docker hub
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@v3.4.0
with:
username: ${{ secrets.DOCKER_USER }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Log in to the GitHub container registry
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -138,7 +138,7 @@ jobs:
# version: ${{ needs.init.outputs.tag }}
- name: Upload digests
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v4.6.2
with:
name: digests-${{ matrix.platform.arch }}
path: /tmp/digests
@@ -168,27 +168,27 @@ jobs:
- ghcr
- dockerhub
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/checkout@v4.2.2
- name: Download digests
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@v4.3.0
with:
pattern: digests-*
path: /tmp/digests
merge-multiple: true
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
uses: docker/setup-buildx-action@v3.11.1
- name: Log in to docker hub
if: matrix.registry == 'dockerhub'
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@v3.4.0
with:
username: ${{ secrets.DOCKER_USER }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Log in to the GitHub container registry
if: matrix.registry == 'ghcr'
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -219,19 +219,10 @@ jobs:
- init
- deploy-manifest
steps:
- name: Generate a token
id: generate-token
uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2.2.1
with:
app-id: ${{ secrets.ESPHOME_GITHUB_APP_ID }}
private-key: ${{ secrets.ESPHOME_GITHUB_APP_PRIVATE_KEY }}
owner: esphome
repositories: home-assistant-addon
- name: Trigger Workflow
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v7.0.1
with:
github-token: ${{ steps.generate-token.outputs.token }}
github-token: ${{ secrets.DEPLOY_HA_ADDON_REPO_TOKEN }}
script: |
let description = "ESPHome";
if (context.eventName == "release") {
@@ -254,19 +245,10 @@ jobs:
needs: [init]
environment: ${{ needs.init.outputs.deploy_env }}
steps:
- name: Generate a token
id: generate-token
uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2.2.1
with:
app-id: ${{ secrets.ESPHOME_GITHUB_APP_ID }}
private-key: ${{ secrets.ESPHOME_GITHUB_APP_PRIVATE_KEY }}
owner: esphome
repositories: esphome-schema
- name: Trigger Workflow
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v7.0.1
with:
github-token: ${{ steps.generate-token.outputs.token }}
github-token: ${{ secrets.DEPLOY_ESPHOME_SCHEMA_REPO_TOKEN }}
script: |
github.rest.actions.createWorkflowDispatch({
owner: "esphome",
@@ -277,34 +259,3 @@ jobs:
version: "${{ needs.init.outputs.tag }}",
}
})
version-notifier:
if: github.repository == 'esphome/esphome' && needs.init.outputs.branch_build == 'false'
runs-on: ubuntu-latest
needs:
- init
- deploy-manifest
steps:
- name: Generate a token
id: generate-token
uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2.2.1
with:
app-id: ${{ secrets.ESPHOME_GITHUB_APP_ID }}
private-key: ${{ secrets.ESPHOME_GITHUB_APP_PRIVATE_KEY }}
owner: esphome
repositories: version-notifier
- name: Trigger Workflow
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
github-token: ${{ steps.generate-token.outputs.token }}
script: |
github.rest.actions.createWorkflowDispatch({
owner: "esphome",
repo: "version-notifier",
workflow_id: "notify.yml",
ref: "main",
inputs: {
version: "${{ needs.init.outputs.tag }}",
}
})


@@ -15,52 +15,36 @@ concurrency:
jobs:
stale:
if: github.repository_owner == 'esphome'
runs-on: ubuntu-latest
steps:
- name: Stale
uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1
- uses: actions/stale@v9.1.0
with:
debug-only: ${{ github.ref != 'refs/heads/dev' }} # Dry-run when not run on dev branch
remove-stale-when-updated: true
operations-per-run: 400
# The 90 day stale policy for PRs
# - PRs
# - No PRs marked as "not-stale"
# - No Issues (see below)
days-before-pr-stale: 90
days-before-pr-close: 7
days-before-issue-stale: -1
days-before-issue-close: -1
remove-stale-when-updated: true
stale-pr-label: "stale"
exempt-pr-labels: "not-stale"
stale-pr-message: >
There hasn't been any activity on this pull request recently. This
pull request has been automatically marked as stale because of that
and will be closed if no further activity occurs within 7 days.
Thank you for your contributions.
If you are the author of this PR, please leave a comment if you want
to keep it open. Also, please rebase your PR onto the latest dev
branch to ensure that it's up to date with the latest changes.
Thank you for your contribution!
# The 90 day stale policy for Issues
# - Issues
# - No Issues marked as "not-stale"
# - No PRs (see above)
days-before-issue-stale: 90
days-before-issue-close: 7
# Use stale to automatically close issues with a
# reference to the issue tracker
close-issues:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v9.1.0
with:
days-before-pr-stale: -1
days-before-pr-close: -1
days-before-issue-stale: 1
days-before-issue-close: 1
remove-stale-when-updated: true
stale-issue-label: "stale"
exempt-issue-labels: "not-stale"
stale-issue-message: >
There hasn't been any activity on this issue recently. Due to the
high number of incoming GitHub notifications, we have to clean some
of the old issues, as many of them have already been resolved with
the latest updates.
Please make sure to update to the latest ESPHome version and
check if that solves the issue. Let us know if that works for you by
adding a comment 👍
This issue has now been marked as stale and will be closed if no
further activity occurs. Thank you for your contributions.
https://github.com/esphome/esphome/issues/430


@@ -1,31 +0,0 @@
name: Status check labels
on:
pull_request:
types: [labeled, unlabeled]
jobs:
check:
name: Check ${{ matrix.label }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
label:
- needs-docs
- merge-after-release
- chained-pr
steps:
- name: Check for ${{ matrix.label }} label
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
script: |
const { data: labels } = await github.rest.issues.listLabelsOnIssue({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number
});
const hasLabel = labels.find(label => label.name === '${{ matrix.label }}');
if (hasLabel) {
core.setFailed('Pull request cannot be merged, it is labeled as ${{ matrix.label }}');
}


@@ -13,16 +13,16 @@ jobs:
if: github.repository == 'esphome/esphome'
steps:
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@v4.2.2
- name: Checkout Home Assistant
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@v4.2.2
with:
repository: home-assistant/core
path: lib/home-assistant
- name: Setup Python
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@v5.6.0
with:
python-version: 3.13
@@ -30,18 +30,13 @@ jobs:
run: |
python -m pip install --upgrade pip
pip install -e lib/home-assistant
pip install -r requirements_test.txt pre-commit
- name: Sync
run: |
python ./script/sync-device_class.py
- name: Run pre-commit hooks
run: |
python script/run-in-env.py pre-commit run --all-files
- name: Commit changes
uses: peter-evans/create-pull-request@98357b18bf14b5342f975ff684046ec3b2a07725 # v8.0.0
uses: peter-evans/create-pull-request@v7.0.8
with:
commit-message: "Synchronise Device Classes from Home Assistant"
committer: esphomebot <esphome@openhomefoundation.org>

.gitignore

@@ -91,10 +91,6 @@ venv-*/
# mypy
.mypy_cache/
# nix
/default.nix
/shell.nix
.pioenvs
.piolibdeps
.pio


@@ -11,7 +11,7 @@ ci:
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.14.13
rev: v0.12.5
hooks:
# Run the linter.
- id: ruff


@@ -21,7 +21,6 @@ esphome/components/adc128s102/* @DeerMaximum
esphome/components/addressable_light/* @justfalter
esphome/components/ade7880/* @kpfleming
esphome/components/ade7953/* @angelnu
esphome/components/ade7953_base/* @angelnu
esphome/components/ade7953_i2c/* @angelnu
esphome/components/ade7953_spi/* @angelnu
esphome/components/ads1118/* @solomondg1
@@ -41,12 +40,11 @@ esphome/components/analog_threshold/* @ianchi
esphome/components/animation/* @syndlex
esphome/components/anova/* @buxtronix
esphome/components/apds9306/* @aodrenah
esphome/components/api/* @esphome/core
esphome/components/aqi/* @freekode @jasstrong @ximex
esphome/components/api/* @OttoWinter
esphome/components/as5600/* @ammmze
esphome/components/as5600/sensor/* @ammmze
esphome/components/as7341/* @mrgnr
esphome/components/async_tcp/* @esphome/core
esphome/components/async_tcp/* @OttoWinter
esphome/components/at581x/* @X-Ryl669
esphome/components/atc_mithermometer/* @ahpohl
esphome/components/atm90e26/* @danieltwagner
@@ -64,17 +62,14 @@ esphome/components/bedjet/fan/* @jhansche
esphome/components/bedjet/sensor/* @javawizard @jhansche
esphome/components/beken_spi_led_strip/* @Mat931
esphome/components/bh1750/* @OttoWinter
esphome/components/bh1900nux/* @B48D81EFCC
esphome/components/binary_sensor/* @esphome/core
esphome/components/bk72xx/* @kuba2k2
esphome/components/bl0906/* @athom-tech @jesserockz @tarontop
esphome/components/bl0939/* @ziceva
esphome/components/bl0940/* @dan-s-github @tobias-
esphome/components/bl0940/* @tobias-
esphome/components/bl0942/* @dbuezas @dwmw2
esphome/components/ble_client/* @buxtronix @clydebarrow
esphome/components/ble_nus/* @tomaszduda23
esphome/components/bluetooth_proxy/* @bdraco @jesserockz
esphome/components/bm8563/* @abmantis
esphome/components/bluetooth_proxy/* @jesserockz
esphome/components/bme280_base/* @esphome/core
esphome/components/bme280_spi/* @apbodrov
esphome/components/bme680_bsec/* @trvrnrth
@@ -91,15 +86,12 @@ esphome/components/bmp3xx_spi/* @latonita
esphome/components/bmp581/* @kahrendt
esphome/components/bp1658cj/* @Cossid
esphome/components/bp5758d/* @Cossid
esphome/components/bthome_mithermometer/* @nagyrobi
esphome/components/button/* @esphome/core
esphome/components/bytebuffer/* @clydebarrow
esphome/components/camera/* @bdraco @DT-art1
esphome/components/camera_encoder/* @DT-art1
esphome/components/camera/* @DT-art1 @bdraco
esphome/components/canbus/* @danielschramm @mvturnho
esphome/components/cap1188/* @mreditor97
esphome/components/captive_portal/* @esphome/core
esphome/components/cc1101/* @gabest11 @lygris
esphome/components/captive_portal/* @OttoWinter
esphome/components/ccs811/* @habbie
esphome/components/cd74hc4067/* @asoehlke
esphome/components/ch422g/* @clydebarrow @jesterret
@@ -126,7 +118,7 @@ esphome/components/dallas_temp/* @ssieb
esphome/components/daly_bms/* @s1lvi0
esphome/components/dashboard_import/* @esphome/core
esphome/components/datetime/* @jesserockz @rfdarter
esphome/components/debug/* @esphome/core
esphome/components/debug/* @OttoWinter
esphome/components/delonghi/* @grob6000
esphome/components/dfplayer/* @glmnet
esphome/components/dfrobot_sen0395/* @niklasweber
@@ -135,7 +127,7 @@ esphome/components/display_menu_base/* @numo68
esphome/components/dps310/* @kbx81
esphome/components/ds1307/* @badbadc0ffee
esphome/components/ds2484/* @mrk-its
esphome/components/dsmr/* @glmnet @PolarGoose @zuidwijk
esphome/components/dsmr/* @glmnet @zuidwijk
esphome/components/duty_time/* @dudanov
esphome/components/ee895/* @Stock-M
esphome/components/ektf2232/touchscreen/* @jesserockz
@@ -146,37 +138,33 @@ esphome/components/ens160_base/* @latonita @vincentscode
esphome/components/ens160_i2c/* @latonita
esphome/components/ens160_spi/* @latonita
esphome/components/ens210/* @itn3rd77
esphome/components/epaper_spi/* @esphome/core
esphome/components/es7210/* @kahrendt
esphome/components/es7243e/* @kbx81
esphome/components/es8156/* @kbx81
esphome/components/es8311/* @kahrendt @kroimon
esphome/components/es8388/* @P4uLT
esphome/components/esp32/* @esphome/core
esphome/components/esp32_ble/* @bdraco @jesserockz @Rapsssito
esphome/components/esp32_ble_client/* @bdraco @jesserockz
esphome/components/esp32_ble_server/* @clydebarrow @jesserockz @Rapsssito
esphome/components/esp32_ble_tracker/* @bdraco
esphome/components/esp32_ble/* @Rapsssito @jesserockz
esphome/components/esp32_ble_client/* @jesserockz
esphome/components/esp32_ble_server/* @Rapsssito @clydebarrow @jesserockz
esphome/components/esp32_camera_web_server/* @ayufan
esphome/components/esp32_can/* @Sympatron
esphome/components/esp32_hosted/* @swoboda1337
esphome/components/esp32_hosted/update/* @swoboda1337
esphome/components/esp32_improv/* @jesserockz
esphome/components/esp32_rmt/* @jesserockz
esphome/components/esp32_rmt_led_strip/* @jesserockz
esphome/components/esp8266/* @esphome/core
esphome/components/esp_ldo/* @clydebarrow
esphome/components/espnow/* @jesserockz
esphome/components/espnow/packet_transport/* @EasilyBoredEngineer
esphome/components/ethernet_info/* @gtjadsonsantos
esphome/components/event/* @nohat
esphome/components/event_emitter/* @Rapsssito
esphome/components/exposure_notifications/* @OttoWinter
esphome/components/ezo/* @ssieb
esphome/components/ezo_pmp/* @carlos-sarmiento
esphome/components/factory_reset/* @anatoly-savchenkov
esphome/components/fastled_base/* @OttoWinter
esphome/components/feedback/* @ianchi
esphome/components/fingerprint_grow/* @alexborro @loongyh @OnFreund
esphome/components/fingerprint_grow/* @OnFreund @alexborro @loongyh
esphome/components/font/* @clydebarrow @esphome/core
esphome/components/fs3000/* @kahrendt
esphome/components/ft5x06/* @clydebarrow
@@ -186,14 +174,13 @@ esphome/components/gdk101/* @Szewcson
esphome/components/gl_r01_i2c/* @pkejval
esphome/components/globals/* @esphome/core
esphome/components/gp2y1010au0f/* @zry98
esphome/components/gp8403/* @jesserockz @sebydocky
esphome/components/gp8403/* @jesserockz
esphome/components/gpio/* @esphome/core
esphome/components/gpio/one_wire/* @ssieb
esphome/components/gps/* @coogle @ximex
esphome/components/graph/* @synco
esphome/components/graphical_display_menu/* @MrMDavidson
esphome/components/gree/* @orestismers
esphome/components/gree/switch/* @nagyrobi
esphome/components/grove_gas_mc_v2/* @YorkshireIoT
esphome/components/grove_tb6612fng/* @max246
esphome/components/growatt_solar/* @leeuwte
@@ -208,17 +195,12 @@ esphome/components/havells_solar/* @sourabhjaiswal
esphome/components/hbridge/fan/* @WeekendWarrior
esphome/components/hbridge/light/* @DotNetDann
esphome/components/hbridge/switch/* @dwmw2
esphome/components/hc8/* @omartijn
esphome/components/hdc2010/* @optimusprimespace @ssieb
esphome/components/he60r/* @clydebarrow
esphome/components/heatpumpir/* @rob-deutsch
esphome/components/hitachi_ac424/* @sourabhjaiswal
esphome/components/hlk_fm22x/* @OnFreund
esphome/components/hlw8032/* @rici4kubicek
esphome/components/hm3301/* @freekode
esphome/components/hmac_md5/* @dwmw2
esphome/components/hmac_sha256/* @dwmw2
esphome/components/homeassistant/* @esphome/core @OttoWinter
esphome/components/homeassistant/* @OttoWinter @esphome/core
esphome/components/homeassistant/number/* @landonr
esphome/components/homeassistant/switch/* @Links2004
esphome/components/honeywell_hih_i2c/* @Benichou34
@@ -231,7 +213,6 @@ esphome/components/hte501/* @Stock-M
esphome/components/http_request/ota/* @oarcher
esphome/components/http_request/update/* @jesserockz
esphome/components/htu31d/* @betterengineering
esphome/components/hub75/* @stuartparmenter
esphome/components/hydreon_rgxx/* @functionpointer
esphome/components/hyt271/* @Philippe12
esphome/components/i2c/* @esphome/core
@@ -244,20 +225,18 @@ esphome/components/iaqcore/* @yozik04
esphome/components/ili9xxx/* @clydebarrow @nielsnl68
esphome/components/improv_base/* @esphome/core
esphome/components/improv_serial/* @esphome/core
esphome/components/ina226/* @latonita @Sergio303
esphome/components/ina226/* @Sergio303 @latonita
esphome/components/ina260/* @mreditor97
esphome/components/ina2xx_base/* @latonita
esphome/components/ina2xx_i2c/* @latonita
esphome/components/ina2xx_spi/* @latonita
esphome/components/infrared/* @kbx81
esphome/components/inkbird_ibsth1_mini/* @fkirill
esphome/components/inkplate/* @jesserockz @JosipKuci
esphome/components/inkplate6/* @jesserockz
esphome/components/integration/* @OttoWinter
esphome/components/internal_temperature/* @Mat931
esphome/components/interval/* @esphome/core
esphome/components/ir_rf_proxy/* @kbx81
esphome/components/jsn_sr04t/* @Mafus1
esphome/components/json/* @esphome/core
esphome/components/json/* @OttoWinter
esphome/components/kamstrup_kmp/* @cfeenstra1024
esphome/components/key_collector/* @ssieb
esphome/components/key_provider/* @ssieb
@@ -265,7 +244,6 @@ esphome/components/kuntze/* @ssieb
esphome/components/lc709203f/* @ilikecake
esphome/components/lcd_menu/* @numo68
esphome/components/ld2410/* @regevbr @sebcaps
esphome/components/ld2412/* @Rihan9
esphome/components/ld2420/* @descipher
esphome/components/ld2450/* @hareeshmu
esphome/components/ld24xx/* @kbx81
@@ -275,7 +253,6 @@ esphome/components/libretiny_pwm/* @kuba2k2
esphome/components/light/* @esphome/core
esphome/components/lightwaverf/* @max246
esphome/components/lilygo_t5_47/touchscreen/* @jesserockz
esphome/components/lm75b/* @beormund
esphome/components/ln882x/* @lamauny
esphome/components/lock/* @esphome/core
esphome/components/logger/* @esphome/core
@@ -296,14 +273,13 @@ esphome/components/max7219digit/* @rspaargaren
esphome/components/max9611/* @mckaymatthew
esphome/components/mcp23008/* @jesserockz
esphome/components/mcp23017/* @jesserockz
esphome/components/mcp23s08/* @jesserockz @SenexCrenshaw
esphome/components/mcp23s17/* @jesserockz @SenexCrenshaw
esphome/components/mcp23s08/* @SenexCrenshaw @jesserockz
esphome/components/mcp23s17/* @SenexCrenshaw @jesserockz
esphome/components/mcp23x08_base/* @jesserockz
esphome/components/mcp23x17_base/* @jesserockz
esphome/components/mcp23xxx_base/* @jesserockz
esphome/components/mcp2515/* @danielschramm @mvturnho
esphome/components/mcp3204/* @rsumner
esphome/components/mcp3221/* @philippderdiedas
esphome/components/mcp4461/* @p1ngb4ck
esphome/components/mcp4728/* @berfenger
esphome/components/mcp47a1/* @jesserockz
@@ -313,13 +289,12 @@ esphome/components/md5/* @esphome/core
esphome/components/mdns/* @esphome/core
esphome/components/media_player/* @jesserockz
esphome/components/micro_wake_word/* @jesserockz @kahrendt
esphome/components/micronova/* @edenhaus @jorre05
esphome/components/micronova/* @jorre05
esphome/components/microphone/* @jesserockz @kahrendt
esphome/components/mics_4514/* @jesserockz
esphome/components/midea/* @dudanov
esphome/components/midea_ir/* @dudanov
esphome/components/mipi_dsi/* @clydebarrow
esphome/components/mipi_rgb/* @clydebarrow
esphome/components/mipi_spi/* @clydebarrow
esphome/components/mitsubishi/* @RubyBailey
esphome/components/mixer/speaker/* @kahrendt
@@ -363,7 +338,7 @@ esphome/components/ota/* @esphome/core
esphome/components/output/* @esphome/core
esphome/components/packet_transport/* @clydebarrow
esphome/components/pca6416a/* @Mat931
esphome/components/pca9554/* @bdraco @clydebarrow @hwstar
esphome/components/pca9554/* @clydebarrow @hwstar
esphome/components/pcf85063/* @brogon
esphome/components/pcf8563/* @KoenBreeman
esphome/components/pi4ioe5v6408/* @jesserockz
@@ -374,9 +349,9 @@ esphome/components/pm2005/* @andrewjswan
esphome/components/pmsa003i/* @sjtrny
esphome/components/pmsx003/* @ximex
esphome/components/pmwcs3/* @SeByDocKy
esphome/components/pn532/* @jesserockz @OttoWinter
esphome/components/pn532_i2c/* @jesserockz @OttoWinter
esphome/components/pn532_spi/* @jesserockz @OttoWinter
esphome/components/pn532/* @OttoWinter @jesserockz
esphome/components/pn532_i2c/* @OttoWinter @jesserockz
esphome/components/pn532_spi/* @OttoWinter @jesserockz
esphome/components/pn7150/* @jesserockz @kbx81
esphome/components/pn7150_i2c/* @jesserockz @kbx81
esphome/components/pn7160/* @jesserockz @kbx81
@@ -385,7 +360,7 @@ esphome/components/pn7160_spi/* @jesserockz @kbx81
esphome/components/power_supply/* @esphome/core
esphome/components/preferences/* @esphome/core
esphome/components/psram/* @esphome/core
esphome/components/pulse_meter/* @cstaahl @stevebaxter @TrentHouliston
esphome/components/pulse_meter/* @TrentHouliston @cstaahl @stevebaxter
esphome/components/pvvx_mithermometer/* @pasiz
esphome/components/pylontech/* @functionpointer
esphome/components/qmp6988/* @andrewpc
@@ -397,7 +372,6 @@ esphome/components/radon_eye_rd200/* @jeffeb3
esphome/components/rc522/* @glmnet
esphome/components/rc522_i2c/* @glmnet
esphome/components/rc522_spi/* @glmnet
esphome/components/rd03d/* @jasstrong
esphome/components/resampler/speaker/* @kahrendt
esphome/components/restart/* @esphome/core
esphome/components/rf_bridge/* @jesserockz
@@ -409,7 +383,6 @@ esphome/components/rpi_dpi_rgb/* @clydebarrow
esphome/components/rtl87xx/* @kuba2k2
esphome/components/rtttl/* @glmnet
esphome/components/runtime_stats/* @bdraco
esphome/components/rx8130/* @beormund
esphome/components/safe_mode/* @jsuanet @kbx81 @paulmonigatti
esphome/components/scd4x/* @martgras @sjtrny
esphome/components/script/* @esphome/core
@@ -428,8 +401,7 @@ esphome/components/sensirion_common/* @martgras
esphome/components/sensor/* @esphome/core
esphome/components/sfa30/* @ghsensdev
esphome/components/sgp40/* @SenexCrenshaw
esphome/components/sgp4x/* @martgras @SenexCrenshaw
esphome/components/sha256/* @esphome/core
esphome/components/sgp4x/* @SenexCrenshaw @martgras
esphome/components/shelly_dimmer/* @edge90 @rnauber
esphome/components/sht3xd/* @mrtoy-me
esphome/components/sht4x/* @sjtrny
@@ -451,7 +423,6 @@ esphome/components/speaker/media_player/* @kahrendt @synesthesiam
esphome/components/spi/* @clydebarrow @esphome/core
esphome/components/spi_device/* @clydebarrow
esphome/components/spi_led_strip/* @clydebarrow
esphome/components/split_buffer/* @jesserockz
esphome/components/sprinkler/* @kbx81
esphome/components/sps30/* @martgras
esphome/components/ssd1322_base/* @kbx81
@@ -473,7 +444,6 @@ esphome/components/st7735/* @SenexCrenshaw
esphome/components/st7789v/* @kbx81
esphome/components/st7920/* @marsjan155
esphome/components/statsd/* @Links2004
esphome/components/stts22h/* @B48D81EFCC
esphome/components/substitutions/* @esphome/core
esphome/components/sun/* @OttoWinter
esphome/components/sun_gtil2/* @Mat931
@@ -495,16 +465,14 @@ esphome/components/template/datetime/* @rfdarter
esphome/components/template/event/* @nohat
esphome/components/template/fan/* @ssieb
esphome/components/text/* @mauritskorse
esphome/components/thermopro_ble/* @sittner
esphome/components/thermostat/* @kbx81
esphome/components/time/* @esphome/core
esphome/components/tinyusb/* @kbx81
esphome/components/time/* @OttoWinter
esphome/components/tlc5947/* @rnauber
esphome/components/tlc5971/* @IJIJI
esphome/components/tm1621/* @Philippe12
esphome/components/tm1637/* @glmnet
esphome/components/tm1638/* @skykingjwc
esphome/components/tm1651/* @mrtoy-me
esphome/components/tm1651/* @freekode
esphome/components/tmp102/* @timsavage
esphome/components/tmp1075/* @sybrenstuvel
esphome/components/tmp117/* @Azimath
@@ -523,7 +491,6 @@ esphome/components/tuya/switch/* @jesserockz
esphome/components/tuya/text_sensor/* @dentra
esphome/components/uart/* @esphome/core
esphome/components/uart/button/* @ssieb
esphome/components/uart/event/* @eoasmxd
esphome/components/uart/packet_transport/* @clydebarrow
esphome/components/udp/* @clydebarrow
esphome/components/ufire_ec/* @pvizeli
@@ -531,7 +498,6 @@ esphome/components/ufire_ise/* @pvizeli
esphome/components/ultrasonic/* @OttoWinter
esphome/components/update/* @jesserockz
esphome/components/uponor_smatrix/* @kroimon
esphome/components/usb_cdc_acm/* @kbx81
esphome/components/usb_host/* @clydebarrow
esphome/components/usb_uart/* @clydebarrow
esphome/components/valve/* @esphome/core
@@ -542,10 +508,9 @@ esphome/components/version/* @esphome/core
esphome/components/voice_assistant/* @jesserockz @kahrendt
esphome/components/wake_on_lan/* @clydebarrow @willwill2will54
esphome/components/watchdog/* @oarcher
esphome/components/water_heater/* @dhoeben
esphome/components/waveshare_epaper/* @clydebarrow
esphome/components/web_server/ota/* @esphome/core
esphome/components/web_server_base/* @esphome/core
esphome/components/web_server_base/* @OttoWinter
esphome/components/web_server_idf/* @dentra
esphome/components/weikai/* @DrCoolZic
esphome/components/weikai_i2c/* @DrCoolZic
@@ -563,7 +528,6 @@ esphome/components/wk2204_spi/* @DrCoolZic
esphome/components/wk2212_i2c/* @DrCoolZic
esphome/components/wk2212_spi/* @DrCoolZic
esphome/components/wl_134/* @hobbypunk90
esphome/components/wts01/* @alepee
esphome/components/x9c/* @EtienneMD
esphome/components/xgzp68xx/* @gcormier
esphome/components/xiaomi_hhccjcy10/* @fariouche
@@ -578,6 +542,4 @@ esphome/components/xpt2046/touchscreen/* @nielsnl68 @numo68
esphome/components/xxtea/* @clydebarrow
esphome/components/zephyr/* @tomaszduda23
esphome/components/zhlt01/* @cfeenstra1024
esphome/components/zigbee/* @tomaszduda23
esphome/components/zio_ultrasonic/* @kahrendt
esphome/components/zwave_proxy/* @kbx81


@@ -2,7 +2,7 @@
We welcome contributions to the ESPHome suite of code and documentation!
Please read our [contributing guide](https://developers.esphome.io/contributing/code/) if you wish to contribute to the
Please read our [contributing guide](https://esphome.io/guides/contributing.html) if you wish to contribute to the
project and be sure to join us on [Discord](https://discord.gg/KhAMKrd).
**See also:**


@@ -48,7 +48,7 @@ PROJECT_NAME = ESPHome
# could be handy for archiving the generated documentation or if some version
# control system is used.
PROJECT_NUMBER = 2026.2.0-dev
PROJECT_NUMBER = 2025.8.0-dev
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer a


@@ -1,7 +1,6 @@
include LICENSE
include README.md
include requirements.txt
recursive-include esphome *.yaml
recursive-include esphome *.cpp *.h *.tcc *.c
recursive-include esphome *.py.script
recursive-include esphome LICENSE.txt


@@ -2,8 +2,8 @@
<a href="https://esphome.io/">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://media.esphome.io/logo/logo-text-on-dark.svg">
<img src="https://media.esphome.io/logo/logo-text-on-light.svg" alt="ESPHome Logo">
<source media="(prefers-color-scheme: dark)" srcset="https://esphome.io/_static/logo-text-on-dark.svg", alt="ESPHome Logo">
<img src="https://esphome.io/_static/logo-text-on-light.svg" alt="ESPHome Logo">
</picture>
</a>


@@ -11,16 +11,6 @@ FROM base-source-${BUILD_TYPE} AS base
RUN git config --system --add safe.directory "*"
# Install build tools for Python packages that require compilation
# (e.g., ruamel.yaml.clibz used by ESP-IDF's idf-component-manager)
RUN if command -v apk > /dev/null; then \
apk add --no-cache build-base; \
else \
apt-get update \
&& apt-get install -y --no-install-recommends build-essential \
&& rm -rf /var/lib/apt/lists/*; \
fi
ENV PIP_DISABLE_PIP_VERSION_CHECK=1
RUN pip install --no-cache-dir -U pip uv==0.6.14


@@ -90,7 +90,7 @@ def main():
def run_command(*cmd, ignore_error: bool = False):
print(f"$ {shlex.join(list(cmd))}")
if not args.dry_run:
rc = subprocess.call(list(cmd), close_fds=False)
rc = subprocess.call(list(cmd))
if rc != 0 and not ignore_error:
print("Command failed")
sys.exit(1)

File diff suppressed because it is too large.


@@ -1,142 +0,0 @@
"""Address cache for DNS and mDNS lookups."""
from __future__ import annotations
import logging
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from collections.abc import Iterable
_LOGGER = logging.getLogger(__name__)
def normalize_hostname(hostname: str) -> str:
"""Normalize hostname for cache lookups.
Removes trailing dots and converts to lowercase.
"""
return hostname.rstrip(".").lower()
class AddressCache:
"""Cache for DNS and mDNS address lookups.
This cache stores pre-resolved addresses from command-line arguments
to avoid slow DNS/mDNS lookups during builds.
"""
def __init__(
self,
mdns_cache: dict[str, list[str]] | None = None,
dns_cache: dict[str, list[str]] | None = None,
) -> None:
"""Initialize the address cache.
Args:
mdns_cache: Pre-populated mDNS addresses (hostname -> IPs)
dns_cache: Pre-populated DNS addresses (hostname -> IPs)
"""
self.mdns_cache = mdns_cache or {}
self.dns_cache = dns_cache or {}
def _get_cached_addresses(
self, hostname: str, cache: dict[str, list[str]], cache_type: str
) -> list[str] | None:
"""Get cached addresses from a specific cache.
Args:
hostname: The hostname to look up
cache: The cache dictionary to check
cache_type: Type of cache for logging ("mDNS" or "DNS")
Returns:
List of IP addresses if found in cache, None otherwise
"""
normalized = normalize_hostname(hostname)
if addresses := cache.get(normalized):
_LOGGER.debug("Using %s cache for %s: %s", cache_type, hostname, addresses)
return addresses
return None
def get_mdns_addresses(self, hostname: str) -> list[str] | None:
"""Get cached mDNS addresses for a hostname.
Args:
hostname: The hostname to look up (should end with .local)
Returns:
List of IP addresses if found in cache, None otherwise
"""
return self._get_cached_addresses(hostname, self.mdns_cache, "mDNS")
def get_dns_addresses(self, hostname: str) -> list[str] | None:
"""Get cached DNS addresses for a hostname.
Args:
hostname: The hostname to look up
Returns:
List of IP addresses if found in cache, None otherwise
"""
return self._get_cached_addresses(hostname, self.dns_cache, "DNS")
def get_addresses(self, hostname: str) -> list[str] | None:
"""Get cached addresses for a hostname.
Checks mDNS cache for .local domains, DNS cache otherwise.
Args:
hostname: The hostname to look up
Returns:
List of IP addresses if found in cache, None otherwise
"""
normalized = normalize_hostname(hostname)
if normalized.endswith(".local"):
return self.get_mdns_addresses(hostname)
return self.get_dns_addresses(hostname)
def has_cache(self) -> bool:
"""Check if any cache entries exist."""
return bool(self.mdns_cache or self.dns_cache)
@classmethod
def from_cli_args(
cls, mdns_args: Iterable[str], dns_args: Iterable[str]
) -> AddressCache:
"""Create cache from command-line arguments.
Args:
mdns_args: List of mDNS cache entries like ['host=ip1,ip2']
dns_args: List of DNS cache entries like ['host=ip1,ip2']
Returns:
Configured AddressCache instance
"""
mdns_cache = cls._parse_cache_args(mdns_args)
dns_cache = cls._parse_cache_args(dns_args)
return cls(mdns_cache=mdns_cache, dns_cache=dns_cache)
@staticmethod
def _parse_cache_args(cache_args: Iterable[str]) -> dict[str, list[str]]:
"""Parse cache arguments into a dictionary.
Args:
cache_args: List of cache mappings like ['host1=ip1,ip2', 'host2=ip3']
Returns:
Dictionary mapping normalized hostnames to list of IP addresses
"""
cache: dict[str, list[str]] = {}
for arg in cache_args:
if "=" not in arg:
_LOGGER.warning(
"Invalid cache format: %s (expected 'hostname=ip1,ip2')", arg
)
continue
hostname, ips = arg.split("=", 1)
# Normalize hostname for consistent lookups
normalized = normalize_hostname(hostname)
cache[normalized] = [ip.strip() for ip in ips.split(",")]
return cache
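# Hedged usage sketch (editor's addition, not part of the original module;
# host names and addresses are made up):
if __name__ == "__main__":
    cache = AddressCache.from_cli_args(
        mdns_args=["Sensor.LOCAL.=192.168.1.10"],
        dns_args=["example.com=93.184.216.34, 93.184.216.35"],
    )
    # Hostnames are normalized on both insert and lookup, so case and trailing
    # dots do not matter; names ending in .local route to the mDNS cache.
    assert cache.get_addresses("sensor.local") == ["192.168.1.10"]
    assert cache.get_addresses("EXAMPLE.COM.") == ["93.184.216.34", "93.184.216.35"]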


@@ -1,620 +0,0 @@
"""Memory usage analyzer for ESPHome compiled binaries."""
from collections import defaultdict
from dataclasses import dataclass, field
import logging
from pathlib import Path
import re
import subprocess
from typing import TYPE_CHECKING
from .const import (
CORE_SUBCATEGORY_PATTERNS,
DEMANGLED_PATTERNS,
ESPHOME_COMPONENT_PATTERN,
SECTION_TO_ATTR,
SYMBOL_PATTERNS,
)
from .demangle import batch_demangle
from .helpers import (
get_component_class_patterns,
get_esphome_components,
map_section_name,
parse_symbol_line,
)
from .toolchain import find_tool, resolve_tool_path, run_tool
if TYPE_CHECKING:
from esphome.platformio_api import IDEData
_LOGGER = logging.getLogger(__name__)
# C++ runtime patterns for categorization
_CPP_RUNTIME_PATTERNS = frozenset(["vtable", "typeinfo", "thunk"])
# libc printf/scanf family base names (used to detect variants like _printf_r, vfprintf, etc.)
_LIBC_PRINTF_SCANF_FAMILY = frozenset(["printf", "fprintf", "sprintf", "scanf"])
# Regex pattern for parsing readelf section headers
# Format: [ #] name type addr off size
_READELF_SECTION_PATTERN = re.compile(
r"\s*\[\s*\d+\]\s+([\.\w]+)\s+\w+\s+[\da-fA-F]+\s+[\da-fA-F]+\s+([\da-fA-F]+)"
)
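# Hedged illustration (sample header line is invented, not from a real dump):
#   "  [ 4] .bss              NOBITS          3ffb0000 02c000 0012a4"
# matches with group(1) == ".bss" (section name) and group(2) == "0012a4"
# (size in hex), i.e. int("0012a4", 16) == 4772 bytes.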
# Component category prefixes
_COMPONENT_PREFIX_ESPHOME = "[esphome]"
_COMPONENT_PREFIX_EXTERNAL = "[external]"
_COMPONENT_CORE = f"{_COMPONENT_PREFIX_ESPHOME}core"
_COMPONENT_API = f"{_COMPONENT_PREFIX_ESPHOME}api"
# C++ namespace prefixes
_NAMESPACE_ESPHOME = "esphome::"
_NAMESPACE_STD = "std::"
# Type alias for symbol information: (symbol_name, size, component)
SymbolInfoType = tuple[str, int, str]
# RAM sections - symbols in these sections consume RAM
RAM_SECTIONS = frozenset([".data", ".bss"])
@dataclass
class MemorySection:
"""Represents a memory section with its symbols."""
name: str
symbols: list[SymbolInfoType] = field(default_factory=list)
total_size: int = 0 # Actual section size from ELF headers
symbol_size: int = 0 # Sum of symbol sizes (may be less than total_size)
@dataclass
class SDKSymbol:
"""Represents a symbol from an SDK library that's not in the ELF symbol table."""
name: str
size: int
library: str # Name of the .a file (e.g., "libpp.a")
section: str # ".bss" or ".data"
is_local: bool # True if static/local symbol (lowercase in nm output)
demangled: str = "" # Demangled name (populated after analysis)
@dataclass
class ComponentMemory:
"""Tracks memory usage for a component."""
name: str
text_size: int = 0 # Code in flash
rodata_size: int = 0 # Read-only data in flash
data_size: int = 0 # Initialized data (flash + ram)
bss_size: int = 0 # Uninitialized data (ram only)
symbol_count: int = 0
@property
def flash_total(self) -> int:
"""Total flash usage (text + rodata + data)."""
return self.text_size + self.rodata_size + self.data_size
@property
def ram_total(self) -> int:
"""Total RAM usage (data + bss)."""
return self.data_size + self.bss_size
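# Worked example (hypothetical sizes): text=1000, rodata=200, data=50, bss=300
# gives flash_total == 1250 (initialized data is stored in flash) and
# ram_total == 350 (that same data also occupies RAM once copied in at boot,
# plus the zero-initialized bss).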
class MemoryAnalyzer:
"""Analyzes memory usage from ELF files."""
def __init__(
self,
elf_path: str,
objdump_path: str | None = None,
readelf_path: str | None = None,
external_components: set[str] | None = None,
idedata: "IDEData | None" = None,
) -> None:
"""Initialize memory analyzer.
Args:
elf_path: Path to ELF file to analyze
objdump_path: Path to objdump binary (auto-detected from idedata if not provided)
readelf_path: Path to readelf binary (auto-detected from idedata if not provided)
external_components: Set of external component names
idedata: Optional PlatformIO IDEData object to auto-detect toolchain paths
"""
self.elf_path = Path(elf_path)
if not self.elf_path.exists():
raise FileNotFoundError(f"ELF file not found: {elf_path}")
# Auto-detect toolchain paths from idedata if not provided
if idedata is not None and (objdump_path is None or readelf_path is None):
objdump_path = objdump_path or idedata.objdump_path
readelf_path = readelf_path or idedata.readelf_path
_LOGGER.debug("Using toolchain paths from PlatformIO idedata")
# Validate paths exist, fall back to find_tool if they don't
# This handles cases like Zephyr where cc_path doesn't include full path
# and the toolchain prefix may differ (e.g., arm-zephyr-eabi- vs arm-none-eabi-)
objdump_path = resolve_tool_path("objdump", objdump_path, objdump_path)
readelf_path = resolve_tool_path("readelf", readelf_path, objdump_path)
self.objdump_path = objdump_path or "objdump"
self.readelf_path = readelf_path or "readelf"
self.external_components = external_components or set()
self._idedata = idedata
# Derive nm path from objdump path using shared toolchain utility
self.nm_path = find_tool("nm", self.objdump_path)
self.sections: dict[str, MemorySection] = {}
self.components: dict[str, ComponentMemory] = defaultdict(
lambda: ComponentMemory("")
)
self._demangle_cache: dict[str, str] = {}
self._uncategorized_symbols: list[tuple[str, str, int]] = []
self._esphome_core_symbols: list[
tuple[str, str, int]
] = [] # Track core symbols
# Track symbols for all components: (symbol_name, demangled, size, section)
self._component_symbols: dict[str, list[tuple[str, str, int, str]]] = (
defaultdict(list)
)
# Track RAM symbols separately for detailed analysis: (symbol_name, demangled, size, section)
self._ram_symbols: dict[str, list[tuple[str, str, int, str]]] = defaultdict(
list
)
# Track ELF symbol names for SDK cross-reference
self._elf_symbol_names: set[str] = set()
# SDK symbols not in ELF (static/local symbols from closed-source libs)
self._sdk_symbols: list[SDKSymbol] = []
def analyze(self) -> dict[str, ComponentMemory]:
"""Analyze the ELF file and return component memory usage."""
self._parse_sections()
self._parse_symbols()
self._categorize_symbols()
self._analyze_sdk_libraries()
return dict(self.components)
def _parse_sections(self) -> None:
"""Parse section headers from ELF file."""
result = subprocess.run(
[self.readelf_path, "-S", str(self.elf_path)],
capture_output=True,
text=True,
check=True,
)
# Parse section headers
for line in result.stdout.splitlines():
# Look for section entries
if not (match := _READELF_SECTION_PATTERN.match(line)):
continue
section_name = match.group(1)
size_hex = match.group(2)
size = int(size_hex, 16)
# Map to standard section name
mapped_section = map_section_name(section_name)
if not mapped_section:
continue
if mapped_section not in self.sections:
self.sections[mapped_section] = MemorySection(mapped_section)
self.sections[mapped_section].total_size += size
def _parse_symbols(self) -> None:
"""Parse symbols from ELF file."""
result = subprocess.run(
[self.objdump_path, "-t", str(self.elf_path)],
capture_output=True,
text=True,
check=True,
)
# Track seen addresses to avoid duplicates
seen_addresses: set[str] = set()
for line in result.stdout.splitlines():
if not (symbol_info := parse_symbol_line(line)):
continue
section, name, size, address = symbol_info
# Skip duplicate symbols at the same address (e.g., C1/C2 constructors)
if address in seen_addresses or section not in self.sections:
continue
self.sections[section].symbols.append((name, size, ""))
self.sections[section].symbol_size += size
self._elf_symbol_names.add(name)
seen_addresses.add(address)
def _categorize_symbols(self) -> None:
"""Categorize symbols by component."""
# First, collect all unique symbol names for batch demangling
all_symbols = {
symbol_name
for section in self.sections.values()
for symbol_name, _, _ in section.symbols
}
# Batch demangle all symbols at once
self._batch_demangle_symbols(list(all_symbols))
# Now categorize with cached demangled names
for section_name, section in self.sections.items():
for symbol_name, size, _ in section.symbols:
component = self._identify_component(symbol_name)
if component not in self.components:
self.components[component] = ComponentMemory(component)
comp_mem = self.components[component]
comp_mem.symbol_count += 1
# Update the appropriate size attribute based on section
if attr_name := SECTION_TO_ATTR.get(section_name):
setattr(comp_mem, attr_name, getattr(comp_mem, attr_name) + size)
# Track uncategorized symbols
if component == "other" and size > 0:
demangled = self._demangle_symbol(symbol_name)
self._uncategorized_symbols.append((symbol_name, demangled, size))
# Track ESPHome core symbols for detailed analysis
if component == _COMPONENT_CORE and size > 0:
demangled = self._demangle_symbol(symbol_name)
self._esphome_core_symbols.append((symbol_name, demangled, size))
# Track all component symbols for detailed analysis
if size > 0:
demangled = self._demangle_symbol(symbol_name)
self._component_symbols[component].append(
(symbol_name, demangled, size, section_name)
)
# Track RAM symbols separately for detailed RAM analysis
if section_name in RAM_SECTIONS:
self._ram_symbols[component].append(
(symbol_name, demangled, size, section_name)
)
def _identify_component(self, symbol_name: str) -> str:
"""Identify which component a symbol belongs to."""
# Demangle C++ names if needed
demangled = self._demangle_symbol(symbol_name)
# Check for special component classes first (before namespace pattern)
# This handles cases like esphome::ESPHomeOTAComponent which should map to ota
if _NAMESPACE_ESPHOME in demangled:
# Check for special component classes that include component name in the class
# For example: esphome::ESPHomeOTAComponent -> ota component
for component_name in get_esphome_components():
patterns = get_component_class_patterns(component_name)
if any(pattern in demangled for pattern in patterns):
return f"{_COMPONENT_PREFIX_ESPHOME}{component_name}"
# Check for ESPHome component namespaces
match = ESPHOME_COMPONENT_PATTERN.search(demangled)
if match:
component_name = match.group(1)
# Strip trailing underscore if present (e.g., switch_ -> switch)
component_name = component_name.rstrip("_")
# Check if this is an actual component in the components directory
if component_name in get_esphome_components():
return f"{_COMPONENT_PREFIX_ESPHOME}{component_name}"
# Check if this is a known external component from the config
if component_name in self.external_components:
return f"{_COMPONENT_PREFIX_EXTERNAL}{component_name}"
# Everything else in esphome:: namespace is core
return _COMPONENT_CORE
# Check for esphome core namespace (no component namespace)
if _NAMESPACE_ESPHOME in demangled:
# If no component match found, it's core
return _COMPONENT_CORE
# Check against symbol patterns
for component, patterns in SYMBOL_PATTERNS.items():
if any(pattern in symbol_name for pattern in patterns):
return component
# Check against demangled patterns
for component, patterns in DEMANGLED_PATTERNS.items():
if any(pattern in demangled for pattern in patterns):
return component
# Special cases that need more complex logic
# Check if spi_flash vs spi_driver
if "spi_" in symbol_name or "SPI" in symbol_name:
return "spi_flash" if "spi_flash" in symbol_name else "spi_driver"
# libc printf/scanf family: names like _vsnprintf_r reduce to a base name
# (printf, scanf, ...) after stripping the leading underscore and the
# v/s/_r modifier characters
if (
symbol_name.startswith("_")
and symbol_name[1:].replace("_r", "").replace("v", "").replace("s", "")
in _LIBC_PRINTF_SCANF_FAMILY
):
return "libc"
# Track uncategorized symbols for analysis
return "other"
def _batch_demangle_symbols(self, symbols: list[str]) -> None:
"""Batch demangle C++ symbol names for efficiency."""
if not symbols:
return
_LOGGER.info("Demangling %d symbols", len(symbols))
self._demangle_cache = batch_demangle(symbols, objdump_path=self.objdump_path)
_LOGGER.info("Successfully demangled %d symbols", len(self._demangle_cache))
def _demangle_symbol(self, symbol: str) -> str:
"""Get demangled C++ symbol name from cache."""
return self._demangle_cache.get(symbol, symbol)
def _categorize_esphome_core_symbol(self, demangled: str) -> str:
"""Categorize ESPHome core symbols into subcategories."""
# Special patterns that need to be checked separately
if any(pattern in demangled for pattern in _CPP_RUNTIME_PATTERNS):
return "C++ Runtime (vtables/RTTI)"
if demangled.startswith(_NAMESPACE_STD):
return "C++ STL"
# Check against patterns from const.py
for category, patterns in CORE_SUBCATEGORY_PATTERNS.items():
if any(pattern in demangled for pattern in patterns):
return category
return "Other Core"
def get_unattributed_ram(self) -> tuple[int, int, int]:
"""Get unattributed RAM sizes (SDK/framework overhead).
Returns:
Tuple of (unattributed_bss, unattributed_data, total_unattributed)
These are bytes in RAM sections that have no corresponding symbols.
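Example (illustrative numbers):
bss, data, total = analyzer.get_unattributed_ram() # e.g. (2048, 256, 2304)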
"""
bss_section = self.sections.get(".bss")
data_section = self.sections.get(".data")
unattributed_bss = 0
unattributed_data = 0
if bss_section:
unattributed_bss = max(0, bss_section.total_size - bss_section.symbol_size)
if data_section:
unattributed_data = max(
0, data_section.total_size - data_section.symbol_size
)
return unattributed_bss, unattributed_data, unattributed_bss + unattributed_data
def _find_sdk_library_dirs(self) -> list[Path]:
"""Find SDK library directories based on platform.
Returns:
List of paths to SDK library directories containing .a files.
"""
sdk_dirs: list[Path] = []
if self._idedata is None:
return sdk_dirs
# Get the CC path to determine the framework location
cc_path = getattr(self._idedata, "cc_path", None)
if not cc_path:
return sdk_dirs
cc_path = Path(cc_path)
# For ESP8266 Arduino framework
# CC is like: ~/.platformio/packages/toolchain-xtensa/bin/xtensa-lx106-elf-gcc
# SDK libs are in: ~/.platformio/packages/framework-arduinoespressif8266/tools/sdk/lib/
if "xtensa-lx106" in str(cc_path):
platformio_dir = cc_path.parent.parent.parent
esp8266_sdk = (
platformio_dir
/ "framework-arduinoespressif8266"
/ "tools"
/ "sdk"
/ "lib"
)
if esp8266_sdk.exists():
sdk_dirs.append(esp8266_sdk)
# Also check for NONOSDK subdirectories (closed-source libs)
sdk_dirs.extend(
subdir
for subdir in esp8266_sdk.iterdir()
if subdir.is_dir() and subdir.name.startswith("NONOSDK")
)
# For ESP32 IDF framework
# CC is like: ~/.platformio/packages/toolchain-xtensa-esp-elf/bin/xtensa-esp32-elf-gcc
# or: ~/.platformio/packages/toolchain-riscv32-esp/bin/riscv32-esp-elf-gcc
elif "xtensa-esp" in str(cc_path) or "riscv32-esp" in str(cc_path):
# Detect ESP32 variant from CC path or defines
variant = self._detect_esp32_variant()
if variant:
platformio_dir = cc_path.parent.parent.parent
espidf_dir = platformio_dir / "framework-espidf" / "components"
if espidf_dir.exists():
# Find all directories named after the variant that contain .a files
# This handles various ESP-IDF library layouts:
# - components/*/lib/<variant>/
# - components/*/<variant>/
# - components/*/lib/lib/<variant>/
# - components/*/*/lib_*/<variant>/
sdk_dirs.extend(
variant_dir
for variant_dir in espidf_dir.rglob(variant)
if variant_dir.is_dir() and any(variant_dir.glob("*.a"))
)
return sdk_dirs
def _detect_esp32_variant(self) -> str | None:
"""Detect ESP32 variant from idedata defines.
Returns:
Variant string like 'esp32', 'esp32s2', 'esp32c3', etc. or None.
"""
if self._idedata is None:
return None
defines = getattr(self._idedata, "defines", [])
if not defines:
return None
# ESPHome always adds USE_ESP32_VARIANT_xxx defines
variant_prefix = "USE_ESP32_VARIANT_"
for define in defines:
if define.startswith(variant_prefix):
# Extract variant name and convert to lowercase
# USE_ESP32_VARIANT_ESP32 -> esp32
# USE_ESP32_VARIANT_ESP32S3 -> esp32s3
return define[len(variant_prefix) :].lower()
return None
def _parse_sdk_library(
self, lib_path: Path
) -> tuple[list[tuple[str, int, str, bool]], set[str]]:
"""Parse a single SDK library for symbols.
Args:
lib_path: Path to the .a library file
Returns:
Tuple of:
- List of BSS/DATA symbols: (symbol_name, size, section, is_local)
- Set of global BSS/DATA symbol names (for checking if RAM is linked)
"""
ram_symbols: list[tuple[str, int, str, bool]] = []
global_ram_symbols: set[str] = set()
result = run_tool([self.nm_path, "--size-sort", str(lib_path)], timeout=10)
if result is None:
return ram_symbols, global_ram_symbols
for line in result.stdout.splitlines():
parts = line.split()
if len(parts) < 3:
continue
try:
size = int(parts[0], 16)
sym_type = parts[1]
name = parts[2]
# Only collect BSS (b/B) and DATA (d/D) for RAM analysis
if sym_type in ("b", "B"):
section = ".bss"
is_local = sym_type == "b"
ram_symbols.append((name, size, section, is_local))
# Track global RAM symbols (B/D) for linking check
if sym_type == "B":
global_ram_symbols.add(name)
elif sym_type in ("d", "D"):
section = ".data"
is_local = sym_type == "d"
ram_symbols.append((name, size, section, is_local))
if sym_type == "D":
global_ram_symbols.add(name)
except (ValueError, IndexError):
continue
return ram_symbols, global_ram_symbols
def _analyze_sdk_libraries(self) -> None:
"""Analyze SDK libraries to find symbols not in the ELF.
This finds static/local symbols from closed-source SDK libraries
that consume RAM but don't appear in the final ELF symbol table.
Only includes symbols from libraries that have RAM actually linked
(at least one global BSS/DATA symbol in the ELF).
"""
sdk_dirs = self._find_sdk_library_dirs()
if not sdk_dirs:
_LOGGER.debug("No SDK library directories found")
return
_LOGGER.debug("Analyzing SDK libraries in %d directories", len(sdk_dirs))
# Track seen symbols to avoid duplicates from multiple SDK versions
seen_symbols: set[str] = set()
for sdk_dir in sdk_dirs:
for lib_path in sorted(sdk_dir.glob("*.a")):
lib_name = lib_path.name
ram_symbols, global_ram_symbols = self._parse_sdk_library(lib_path)
# Check if this library's RAM is actually linked by seeing if any
# of its global BSS/DATA symbols appear in the ELF
if not global_ram_symbols & self._elf_symbol_names:
# No RAM from this library is in the ELF - skip it
continue
for name, size, section, is_local in ram_symbols:
# Skip if already in ELF or already seen from another lib
if name in self._elf_symbol_names or name in seen_symbols:
continue
# Only track symbols with non-zero size
if size > 0:
self._sdk_symbols.append(
SDKSymbol(
name=name,
size=size,
library=lib_name,
section=section,
is_local=is_local,
)
)
seen_symbols.add(name)
# Demangle SDK symbols for better readability
if self._sdk_symbols:
sdk_names = [sym.name for sym in self._sdk_symbols]
demangled_map = batch_demangle(sdk_names, objdump_path=self.objdump_path)
for sym in self._sdk_symbols:
sym.demangled = demangled_map.get(sym.name, sym.name)
# Sort by size descending for reporting
self._sdk_symbols.sort(key=lambda s: s.size, reverse=True)
total_sdk_ram = sum(s.size for s in self._sdk_symbols)
_LOGGER.debug(
"Found %d SDK symbols not in ELF, totaling %d bytes",
len(self._sdk_symbols),
total_sdk_ram,
)
def get_sdk_ram_symbols(self) -> list[SDKSymbol]:
"""Get SDK symbols that consume RAM but aren't in the ELF symbol table.
Returns:
List of SDKSymbol objects sorted by size descending.
"""
return self._sdk_symbols
def get_sdk_ram_by_library(self) -> dict[str, list[SDKSymbol]]:
"""Get SDK RAM symbols grouped by library.
Returns:
Dictionary mapping library name to list of symbols.
"""
by_lib: dict[str, list[SDKSymbol]] = defaultdict(list)
for sym in self._sdk_symbols:
by_lib[sym.library].append(sym)
return dict(by_lib)
if __name__ == "__main__":
from .cli import main
main()

@@ -1,6 +0,0 @@
"""Main entry point for running the memory analyzer as a module."""
from .cli import main
if __name__ == "__main__":
main()

@@ -1,576 +0,0 @@
"""CLI interface for memory analysis with report generation."""
from __future__ import annotations
from collections import defaultdict
from collections.abc import Callable
import sys
from typing import TYPE_CHECKING
from . import (
_COMPONENT_API,
_COMPONENT_CORE,
_COMPONENT_PREFIX_ESPHOME,
_COMPONENT_PREFIX_EXTERNAL,
RAM_SECTIONS,
MemoryAnalyzer,
)
if TYPE_CHECKING:
from . import ComponentMemory
class MemoryAnalyzerCLI(MemoryAnalyzer):
"""Memory analyzer with CLI-specific report generation."""
# Symbol size threshold for detailed analysis
SYMBOL_SIZE_THRESHOLD: int = (
100 # Show symbols larger than this in detailed analysis
)
# Lower threshold for RAM symbols (RAM is more constrained)
RAM_SYMBOL_SIZE_THRESHOLD: int = 24
# Column width constants
COL_COMPONENT: int = 29
COL_FLASH_TEXT: int = 14
COL_FLASH_DATA: int = 14
COL_RAM_DATA: int = 12
COL_RAM_BSS: int = 12
COL_TOTAL_FLASH: int = 15
COL_TOTAL_RAM: int = 12
COL_SEPARATOR: int = 3 # " | "
# Core analysis column widths
COL_CORE_SUBCATEGORY: int = 30
COL_CORE_SIZE: int = 12
COL_CORE_COUNT: int = 6
COL_CORE_PERCENT: int = 10
# Calculate table width once at class level
TABLE_WIDTH: int = (
COL_COMPONENT
+ COL_SEPARATOR
+ COL_FLASH_TEXT
+ COL_SEPARATOR
+ COL_FLASH_DATA
+ COL_SEPARATOR
+ COL_RAM_DATA
+ COL_SEPARATOR
+ COL_RAM_BSS
+ COL_SEPARATOR
+ COL_TOTAL_FLASH
+ COL_SEPARATOR
+ COL_TOTAL_RAM
)
@staticmethod
def _make_separator_line(*widths: int) -> str:
"""Create a separator line with given column widths.
Args:
widths: Column widths to create separators for
Returns:
Separator line like "----+---------+-----"
"""
return "-+-".join("-" * width for width in widths)
# Pre-computed separator lines
MAIN_TABLE_SEPARATOR: str = _make_separator_line(
COL_COMPONENT,
COL_FLASH_TEXT,
COL_FLASH_DATA,
COL_RAM_DATA,
COL_RAM_BSS,
COL_TOTAL_FLASH,
COL_TOTAL_RAM,
)
CORE_TABLE_SEPARATOR: str = _make_separator_line(
COL_CORE_SUBCATEGORY,
COL_CORE_SIZE,
COL_CORE_COUNT,
COL_CORE_PERCENT,
)
def _add_section_header(self, lines: list[str], title: str) -> None:
"""Add a section header with title centered between separator lines."""
lines.append("")
lines.append("=" * self.TABLE_WIDTH)
lines.append(title.center(self.TABLE_WIDTH))
lines.append("=" * self.TABLE_WIDTH)
lines.append("")
def _add_top_consumers(
self,
lines: list[str],
title: str,
components: list[tuple[str, ComponentMemory]],
get_size: Callable[[ComponentMemory], int],
total: int,
memory_type: str,
limit: int = 25,
) -> None:
"""Add a formatted list of top memory consumers to the report.
Args:
lines: List of report lines to append the output to.
title: Section title to print before the list.
components: Sequence of (name, ComponentMemory) tuples to analyze.
get_size: Callable that takes a ComponentMemory and returns the
size in bytes to use for ranking and display.
total: Total size in bytes for computing percentage usage.
memory_type: Label for the memory region (e.g., "flash" or "RAM").
limit: Maximum number of components to include in the list.
"""
lines.append("")
lines.append(f"{title}:")
for i, (name, mem) in enumerate(components[:limit]):
size = get_size(mem)
if size > 0:
percentage = (size / total * 100) if total > 0 else 0
lines.append(
f"{i + 1}. {name} ({size:,} B) - {percentage:.1f}% of analyzed {memory_type}"
)
def _format_symbol_with_section(
self, demangled: str, size: int, section: str | None = None
) -> str:
"""Format a symbol entry, optionally adding a RAM section label.
If section is one of the RAM sections (.data or .bss), a label like
" [data]" or " [bss]" is appended. For non-RAM sections or when
section is None, no section label is added.
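Example: _format_symbol_with_section("foo::bar()", 128, ".bss")
returns "foo::bar() (128 B) [bss]".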
"""
section_label = ""
if section in RAM_SECTIONS:
section_label = f" [{section[1:]}]" # .data -> [data], .bss -> [bss]
return f"{demangled} ({size:,} B){section_label}"
def generate_report(self, detailed: bool = False) -> str:
"""Generate a formatted memory report."""
components = sorted(
self.components.items(), key=lambda x: x[1].flash_total, reverse=True
)
# Calculate totals
total_flash = sum(c.flash_total for _, c in components)
total_ram = sum(c.ram_total for _, c in components)
# Build report
lines: list[str] = []
lines.append("=" * self.TABLE_WIDTH)
lines.append("Component Memory Analysis".center(self.TABLE_WIDTH))
lines.append("=" * self.TABLE_WIDTH)
lines.append("")
# Main table - fixed column widths
lines.append(
f"{'Component':<{self.COL_COMPONENT}} | {'Flash (text)':>{self.COL_FLASH_TEXT}} | {'Flash (data)':>{self.COL_FLASH_DATA}} | {'RAM (data)':>{self.COL_RAM_DATA}} | {'RAM (bss)':>{self.COL_RAM_BSS}} | {'Total Flash':>{self.COL_TOTAL_FLASH}} | {'Total RAM':>{self.COL_TOTAL_RAM}}"
)
lines.append(self.MAIN_TABLE_SEPARATOR)
for name, mem in components:
if mem.flash_total > 0 or mem.ram_total > 0:
flash_rodata = mem.rodata_size + mem.data_size
lines.append(
f"{name:<{self.COL_COMPONENT}} | {mem.text_size:>{self.COL_FLASH_TEXT - 2},} B | {flash_rodata:>{self.COL_FLASH_DATA - 2},} B | "
f"{mem.data_size:>{self.COL_RAM_DATA - 2},} B | {mem.bss_size:>{self.COL_RAM_BSS - 2},} B | "
f"{mem.flash_total:>{self.COL_TOTAL_FLASH - 2},} B | {mem.ram_total:>{self.COL_TOTAL_RAM - 2},} B"
)
lines.append(self.MAIN_TABLE_SEPARATOR)
lines.append(
f"{'TOTAL':<{self.COL_COMPONENT}} | {' ':>{self.COL_FLASH_TEXT}} | {' ':>{self.COL_FLASH_DATA}} | "
f"{' ':>{self.COL_RAM_DATA}} | {' ':>{self.COL_RAM_BSS}} | "
f"{total_flash:>{self.COL_TOTAL_FLASH - 2},} B | {total_ram:>{self.COL_TOTAL_RAM - 2},} B"
)
# Show unattributed RAM (SDK/framework overhead)
unattributed_bss, unattributed_data, unattributed_total = (
self.get_unattributed_ram()
)
if unattributed_total > 0:
lines.append("")
lines.append(
f"Unattributed RAM: {unattributed_total:,} B (SDK/framework overhead)"
)
if unattributed_bss > 0 and unattributed_data > 0:
lines.append(
f" .bss: {unattributed_bss:,} B | .data: {unattributed_data:,} B"
)
# Show SDK symbol breakdown if available
sdk_by_lib = self.get_sdk_ram_by_library()
if sdk_by_lib:
lines.append("")
lines.append("SDK library breakdown (static symbols not in ELF):")
# Sort libraries by total size
lib_totals = [
(lib, sum(s.size for s in syms), syms)
for lib, syms in sdk_by_lib.items()
]
lib_totals.sort(key=lambda x: x[1], reverse=True)
for lib_name, lib_total, syms in lib_totals:
if lib_total == 0:
continue
lines.append(f" {lib_name}: {lib_total:,} B")
# Show top symbols from this library
for sym in sorted(syms, key=lambda s: s.size, reverse=True)[:3]:
section_label = sym.section.lstrip(".")
# Use demangled name (falls back to original if not demangled)
display_name = sym.demangled or sym.name
if len(display_name) > 50:
display_name = f"{display_name[:47]}..."
lines.append(
f" {sym.size:>6,} B [{section_label}] {display_name}"
)
# Top consumers
self._add_top_consumers(
lines,
"Top Flash Consumers",
components,
lambda m: m.flash_total,
total_flash,
"flash",
)
ram_components = sorted(components, key=lambda x: x[1].ram_total, reverse=True)
self._add_top_consumers(
lines,
"Top RAM Consumers",
ram_components,
lambda m: m.ram_total,
total_ram,
"RAM",
)
# Add ESPHome core detailed analysis if there are core symbols
if self._esphome_core_symbols:
self._add_section_header(lines, f"{_COMPONENT_CORE} Detailed Analysis")
# Group core symbols by subcategory
core_subcategories: dict[str, list[tuple[str, str, int]]] = defaultdict(
list
)
for symbol, demangled, size in self._esphome_core_symbols:
# Categorize based on demangled name patterns
subcategory = self._categorize_esphome_core_symbol(demangled)
core_subcategories[subcategory].append((symbol, demangled, size))
# Sort subcategories by total size
sorted_subcategories = sorted(
[
(name, symbols, sum(s[2] for s in symbols))
for name, symbols in core_subcategories.items()
],
key=lambda x: x[2],
reverse=True,
)
lines.append(
f"{'Subcategory':<{self.COL_CORE_SUBCATEGORY}} | {'Size':>{self.COL_CORE_SIZE}} | "
f"{'Count':>{self.COL_CORE_COUNT}} | {'% of Core':>{self.COL_CORE_PERCENT}}"
)
lines.append(self.CORE_TABLE_SEPARATOR)
core_total = sum(size for _, _, size in self._esphome_core_symbols)
for subcategory, symbols, total_size in sorted_subcategories:
percentage = (total_size / core_total * 100) if core_total > 0 else 0
lines.append(
f"{subcategory:<{self.COL_CORE_SUBCATEGORY}} | {total_size:>{self.COL_CORE_SIZE - 2},} B | "
f"{len(symbols):>{self.COL_CORE_COUNT}} | {percentage:>{self.COL_CORE_PERCENT - 1}.1f}%"
)
# All core symbols above threshold
lines.append("")
sorted_core_symbols = sorted(
self._esphome_core_symbols, key=lambda x: x[2], reverse=True
)
large_core_symbols = [
(symbol, demangled, size)
for symbol, demangled, size in sorted_core_symbols
if size > self.SYMBOL_SIZE_THRESHOLD
]
lines.append(
f"{_COMPONENT_CORE} Symbols > {self.SYMBOL_SIZE_THRESHOLD} B ({len(large_core_symbols)} symbols):"
)
for i, (symbol, demangled, size) in enumerate(large_core_symbols):
# Core symbols only track (symbol, demangled, size) without section info,
# so we don't show section labels here
lines.append(
f"{i + 1}. {self._format_symbol_with_section(demangled, size)}"
)
lines.append("=" * self.TABLE_WIDTH)
# Add detailed analysis for top ESPHome and external components
esphome_components = [
(name, mem)
for name, mem in components
if name.startswith(_COMPONENT_PREFIX_ESPHOME) and name != _COMPONENT_CORE
]
external_components = [
(name, mem)
for name, mem in components
if name.startswith(_COMPONENT_PREFIX_EXTERNAL)
]
top_esphome_components = sorted(
esphome_components, key=lambda x: x[1].flash_total, reverse=True
)[:30]
# Include all external components (they're usually important)
top_external_components = sorted(
external_components, key=lambda x: x[1].flash_total, reverse=True
)
# Check if API component exists and ensure it's included
api_component = None
for name, mem in components:
if name == _COMPONENT_API:
api_component = (name, mem)
break
# Also include wifi_stack and other important system components if they exist
system_components_to_include = [
# Empty list - we've finished debugging symbol categorization
# Add component names here if you need to debug their symbols
]
system_components = [
(name, mem)
for name, mem in components
if name in system_components_to_include
]
# Combine all components to analyze: top ESPHome + all external + API if not already included + system components
components_to_analyze = (
list(top_esphome_components)
+ list(top_external_components)
+ system_components
)
if api_component and api_component not in components_to_analyze:
components_to_analyze.append(api_component)
if components_to_analyze:
for comp_name, comp_mem in components_to_analyze:
if not (comp_symbols := self._component_symbols.get(comp_name, [])):
continue
self._add_section_header(lines, f"{comp_name} Detailed Analysis")
# Sort symbols by size
sorted_symbols = sorted(comp_symbols, key=lambda x: x[2], reverse=True)
lines.append(f"Total symbols: {len(sorted_symbols)}")
lines.append(f"Total size: {comp_mem.flash_total:,} B")
lines.append("")
# Show all symbols above threshold for better visibility
large_symbols = [
(sym, dem, size, sec)
for sym, dem, size, sec in sorted_symbols
if size > self.SYMBOL_SIZE_THRESHOLD
]
lines.append(
f"{comp_name} Symbols > {self.SYMBOL_SIZE_THRESHOLD} B ({len(large_symbols)} symbols):"
)
for i, (symbol, demangled, size, section) in enumerate(large_symbols):
lines.append(
f"{i + 1}. {self._format_symbol_with_section(demangled, size, section)}"
)
lines.append("=" * self.TABLE_WIDTH)
# Detailed RAM analysis by component (at end, before RAM strings analysis)
self._add_section_header(lines, "RAM Symbol Analysis by Component")
# Show top 15 RAM consumers with their large symbols
for name, mem in ram_components[:15]:
if mem.ram_total == 0:
continue
ram_syms = self._ram_symbols.get(name, [])
if not ram_syms:
continue
# Sort by size descending
sorted_ram_syms = sorted(ram_syms, key=lambda x: x[2], reverse=True)
large_ram_syms = [
s for s in sorted_ram_syms if s[2] > self.RAM_SYMBOL_SIZE_THRESHOLD
]
lines.append(f"{name} ({mem.ram_total:,} B total RAM):")
# Show breakdown by section type
data_size = sum(s[2] for s in ram_syms if s[3] == ".data")
bss_size = sum(s[2] for s in ram_syms if s[3] == ".bss")
lines.append(f" .data (initialized): {data_size:,} B")
lines.append(f" .bss (uninitialized): {bss_size:,} B")
if large_ram_syms:
lines.append(
f" Symbols > {self.RAM_SYMBOL_SIZE_THRESHOLD} B ({len(large_ram_syms)}):"
)
for symbol, demangled, size, section in large_ram_syms[:10]:
# Format section label consistently by stripping leading dot
section_label = section.lstrip(".") if section else ""
# Add ellipsis if name is truncated
demangled_display = (
f"{demangled[:70]}..." if len(demangled) > 70 else demangled
)
lines.append(
f" {size:>6,} B [{section_label}] {demangled_display}"
)
if len(large_ram_syms) > 10:
lines.append(f" ... and {len(large_ram_syms) - 10} more")
lines.append("")
lines.append(
"Note: This analysis covers symbols in the ELF file. Some runtime allocations may not be included."
)
lines.append("=" * self.TABLE_WIDTH)
return "\n".join(lines)
def dump_uncategorized_symbols(self, output_file: str | None = None) -> None:
"""Dump uncategorized symbols for analysis."""
# Sort by size descending
sorted_symbols = sorted(
self._uncategorized_symbols, key=lambda x: x[2], reverse=True
)
lines = ["Uncategorized Symbols Analysis", "=" * 80]
lines.append(f"Total uncategorized symbols: {len(sorted_symbols)}")
lines.append(
f"Total uncategorized size: {sum(s[2] for s in sorted_symbols):,} bytes"
)
lines.append("")
lines.append(f"{'Size':>10} | {'Symbol':<60} | Demangled")
lines.append("-" * 10 + "-+-" + "-" * 60 + "-+-" + "-" * 40)
for symbol, demangled, size in sorted_symbols[:100]: # Top 100
demangled_display = (
demangled[:100] if symbol != demangled else "[not demangled]"
)
lines.append(f"{size:>10,} | {symbol[:60]:<60} | {demangled_display}")
if len(sorted_symbols) > 100:
lines.append(f"\n... and {len(sorted_symbols) - 100} more symbols")
content = "\n".join(lines)
if output_file:
with open(output_file, "w", encoding="utf-8") as f:
f.write(content)
else:
print(content)
def analyze_elf(
elf_path: str,
objdump_path: str | None = None,
readelf_path: str | None = None,
detailed: bool = False,
external_components: set[str] | None = None,
) -> str:
"""Analyze an ELF file and return a memory report."""
analyzer = MemoryAnalyzerCLI(
elf_path, objdump_path, readelf_path, external_components
)
analyzer.analyze()
return analyzer.generate_report(detailed)
def main():
"""CLI entrypoint for memory analysis."""
if len(sys.argv) < 2:
print("Usage: python -m esphome.analyze_memory <build_directory>")
print("\nAnalyze memory usage from an ESPHome build directory.")
print("The build directory should contain firmware.elf and idedata will be")
print("loaded from ~/.esphome/.internal/idedata/<device>.json")
print("\nExamples:")
print(" python -m esphome.analyze_memory ~/.esphome/build/my-device")
print(" python -m esphome.analyze_memory .esphome/build/my-device")
print(" python -m esphome.analyze_memory my-device # Short form")
sys.exit(1)
build_dir = sys.argv[1]
# Load build directory
import json
from pathlib import Path
from esphome.platformio_api import IDEData
build_path = Path(build_dir)
# If no path separator in name, assume it's a device name
if "/" not in build_dir and not build_path.is_dir():
# Try current directory first
cwd_path = Path.cwd() / ".esphome" / "build" / build_dir
if cwd_path.is_dir():
build_path = cwd_path
print(f"Using build directory: {build_path}", file=sys.stderr)
else:
# Fall back to home directory
build_path = Path.home() / ".esphome" / "build" / build_dir
print(f"Using build directory: {build_path}", file=sys.stderr)
if not build_path.is_dir():
print(f"Error: {build_path} is not a directory", file=sys.stderr)
sys.exit(1)
# Find firmware.elf
elf_file = None
for elf_candidate in [
build_path / "firmware.elf",
build_path / ".pioenvs" / build_path.name / "firmware.elf",
]:
if elf_candidate.exists():
elf_file = str(elf_candidate)
break
if not elf_file:
print(f"Error: firmware.elf not found in {build_dir}", file=sys.stderr)
sys.exit(1)
# Find idedata.json - check current directory first, then home
device_name = build_path.name
idedata_candidates = [
Path.cwd() / ".esphome" / "idedata" / f"{device_name}.json",
Path.home() / ".esphome" / "idedata" / f"{device_name}.json",
]
idedata = None
for idedata_path in idedata_candidates:
if not idedata_path.exists():
continue
try:
with open(idedata_path, encoding="utf-8") as f:
raw_data = json.load(f)
idedata = IDEData(raw_data)
print(f"Loaded idedata from: {idedata_path}", file=sys.stderr)
break
except (json.JSONDecodeError, OSError) as e:
print(f"Warning: Failed to load idedata: {e}", file=sys.stderr)
if not idedata:
print(
f"Warning: idedata not found (searched {idedata_candidates[0]} and {idedata_candidates[1]})",
file=sys.stderr,
)
analyzer = MemoryAnalyzerCLI(elf_file, idedata=idedata)
analyzer.analyze()
report = analyzer.generate_report()
print(report)
if __name__ == "__main__":
main()

File diff suppressed because it is too large

@@ -1,182 +0,0 @@
"""Symbol demangling utilities for memory analysis.
This module provides functions for demangling C++ symbol names using c++filt.
"""
from __future__ import annotations
import logging
import re
import subprocess
from .toolchain import find_tool
_LOGGER = logging.getLogger(__name__)
# GCC global constructor/destructor prefix annotations
GCC_PREFIX_ANNOTATIONS = {
"_GLOBAL__sub_I_": "global constructor for",
"_GLOBAL__sub_D_": "global destructor for",
}
# GCC optimization suffix pattern (e.g., $isra$0, $part$1, $constprop$2)
GCC_OPTIMIZATION_SUFFIX_PATTERN = re.compile(r"(\$(?:isra|part|constprop)\$\d+)")
def _strip_gcc_annotations(symbol: str) -> tuple[str, str]:
"""Strip GCC optimization suffixes and prefixes from a symbol.
Args:
symbol: The mangled symbol name
Returns:
Tuple of (stripped_symbol, removed_prefix)
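Example:
_strip_gcc_annotations("_GLOBAL__sub_I__Z3foov$isra$0")
returns ("_Z3foov", "_GLOBAL__sub_I_").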
"""
# Remove GCC optimization markers
stripped = GCC_OPTIMIZATION_SUFFIX_PATTERN.sub("", symbol)
# Handle GCC global constructor/initializer prefixes
prefix = ""
for gcc_prefix in GCC_PREFIX_ANNOTATIONS:
if stripped.startswith(gcc_prefix):
prefix = gcc_prefix
stripped = stripped[len(prefix) :]
break
return stripped, prefix
def _restore_symbol_prefix(prefix: str, stripped: str, demangled: str) -> str:
"""Restore prefix that was removed before demangling.
Args:
prefix: Prefix that was removed (e.g., "_GLOBAL__sub_I_")
stripped: Stripped symbol name
demangled: Demangled symbol name
Returns:
Demangled name with prefix restored/annotated
"""
if not prefix:
return demangled
# Successfully demangled - add descriptive prefix
if demangled != stripped and (annotation := GCC_PREFIX_ANNOTATIONS.get(prefix)):
return f"[{annotation}: {demangled}]"
# Failed to demangle - restore original prefix
return prefix + demangled
def _restore_symbol_suffix(original: str, demangled: str) -> str:
"""Restore GCC optimization suffix that was removed before demangling.
Args:
original: Original symbol name with suffix
demangled: Demangled symbol name without suffix
Returns:
Demangled name with suffix annotation
"""
if suffix_match := GCC_OPTIMIZATION_SUFFIX_PATTERN.search(original):
return f"{demangled} [{suffix_match.group(1)}]"
return demangled
def batch_demangle(
symbols: list[str],
cppfilt_path: str | None = None,
objdump_path: str | None = None,
) -> dict[str, str]:
"""Batch demangle C++ symbol names.
Args:
symbols: List of symbol names to demangle
cppfilt_path: Path to c++filt binary (auto-detected if not provided)
objdump_path: Path to objdump binary to derive c++filt path from
Returns:
Dictionary mapping original symbol names to demangled names
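Example (illustrative; "_Z3foov" is the Itanium mangling of "foo()"):
batch_demangle(["_Z3foov"]) # -> {"_Z3foov": "foo()"}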
"""
cache: dict[str, str] = {}
if not symbols:
return cache
# Find c++filt tool
cppfilt_cmd = cppfilt_path or find_tool("c++filt", objdump_path)
if not cppfilt_cmd:
_LOGGER.warning("Could not find c++filt, symbols will not be demangled")
return {s: s for s in symbols}
_LOGGER.debug("Demangling %d symbols using %s", len(symbols), cppfilt_cmd)
# Strip GCC optimization suffixes and prefixes before demangling
symbols_stripped: list[str] = []
symbols_prefixes: list[str] = []
for symbol in symbols:
stripped, prefix = _strip_gcc_annotations(symbol)
symbols_stripped.append(stripped)
symbols_prefixes.append(prefix)
try:
result = subprocess.run(
[cppfilt_cmd],
input="\n".join(symbols_stripped),
capture_output=True,
text=True,
check=False,
)
except (subprocess.SubprocessError, OSError, UnicodeDecodeError) as e:
_LOGGER.warning("Failed to batch demangle symbols: %s", e)
return {s: s for s in symbols}
if result.returncode != 0:
_LOGGER.warning(
"c++filt exited with code %d: %s",
result.returncode,
result.stderr[:200] if result.stderr else "(no error output)",
)
return {s: s for s in symbols}
# Process demangled output
demangled_lines = result.stdout.strip().split("\n")
# Check for output length mismatch
if len(demangled_lines) != len(symbols):
_LOGGER.warning(
"c++filt output mismatch: expected %d lines, got %d",
len(symbols),
len(demangled_lines),
)
return {s: s for s in symbols}
failed_count = 0
for original, stripped, prefix, demangled in zip(
symbols, symbols_stripped, symbols_prefixes, demangled_lines
):
# Add back any prefix that was removed
demangled = _restore_symbol_prefix(prefix, stripped, demangled)
# If we stripped a suffix, add it back to the demangled name for clarity
if original != stripped and not prefix:
demangled = _restore_symbol_suffix(original, demangled)
cache[original] = demangled
# Count symbols that failed to demangle
if stripped == demangled and stripped.startswith("_Z"):
failed_count += 1
if failed_count <= 5:
_LOGGER.debug("Failed to demangle: %s", original)
if failed_count > 0:
_LOGGER.debug(
"Failed to demangle %d/%d symbols using %s",
failed_count,
len(symbols),
cppfilt_cmd,
)
return cache

@@ -1,121 +0,0 @@
"""Helper functions for memory analysis."""
from functools import cache
from pathlib import Path
from .const import SECTION_MAPPING
# This constant also exists in the parent analyze_memory module, but importing
# it here would create a circular import, so it is redefined locally.
_NAMESPACE_ESPHOME = "esphome::"
# Get the list of actual ESPHome components by scanning the components directory
@cache
def get_esphome_components():
"""Get set of actual ESPHome components from the components directory."""
# Find the components directory relative to this file
# Go up two levels from analyze_memory/helpers.py to esphome/
current_dir = Path(__file__).parent.parent
components_dir = current_dir / "components"
if not components_dir.exists() or not components_dir.is_dir():
return frozenset()
return frozenset(
item.name
for item in components_dir.iterdir()
if item.is_dir()
and not item.name.startswith(".")
and not item.name.startswith("__")
)
@cache
def get_component_class_patterns(component_name: str) -> list[str]:
"""Generate component class name patterns for symbol matching.
Args:
component_name: The component name (e.g., "ota", "wifi", "api")
Returns:
List of pattern strings to match against demangled symbols
"""
component_upper = component_name.upper()
component_camel = component_name.replace("_", "").title()
return [
f"{_NAMESPACE_ESPHOME}{component_upper}Component", # e.g., esphome::OTAComponent
f"{_NAMESPACE_ESPHOME}ESPHome{component_upper}Component", # e.g., esphome::ESPHomeOTAComponent
f"{_NAMESPACE_ESPHOME}{component_camel}Component", # e.g., esphome::OtaComponent
f"{_NAMESPACE_ESPHOME}ESPHome{component_camel}Component", # e.g., esphome::ESPHomeOtaComponent
]
def map_section_name(raw_section: str) -> str | None:
"""Map raw section name to standard section.
Args:
raw_section: Raw section name from ELF file (e.g., ".iram0.text", ".rodata.str1.1")
Returns:
Standard section name (".text", ".rodata", ".data", ".bss") or None
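Example (assuming SECTION_MAPPING maps ".iram0.text" to ".text"):
map_section_name(".iram0.text") # -> ".text"
map_section_name(".debug_info") # -> None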
"""
for standard_section, patterns in SECTION_MAPPING.items():
if any(pattern in raw_section for pattern in patterns):
return standard_section
return None
def parse_symbol_line(line: str) -> tuple[str, str, int, str] | None:
"""Parse a single symbol line from objdump output.
Args:
line: Line from objdump -t output
Returns:
Tuple of (section, name, size, address) or None if not a valid symbol.
Format: address l/g w/d F/O section size name
Example: 40084870 l F .iram0.text 00000000 _xt_user_exc
(note: this particular symbol has size 0, so it is skipped and None is
returned; only symbols with a non-zero size parse successfully)
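An illustrative line that does parse (assumes ".iram0.text" maps to ".text"):
parse_symbol_line("40084880 g F .iram0.text 0000001c app_main")
returns (".text", "app_main", 28, "40084880").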
"""
parts = line.split()
if len(parts) < 5:
return None
try:
# Validate and extract address
address = parts[0]
int(address, 16)
except ValueError:
return None
# Look for F (function) or O (object) flag
if "F" not in parts and "O" not in parts:
return None
# Find section, size, and name
# Try each part as a potential section name
for i, part in enumerate(parts):
# Skip parts that are clearly flags, addresses, or other metadata
# Sections start with '.' (standard ELF) or are known section names (Zephyr)
section = map_section_name(part)
if not section:
continue
# Need at least size field after section
if i + 1 >= len(parts):
break
try:
size = int(parts[i + 1], 16)
except ValueError:
break
# Need symbol name and non-zero size
if i + 2 >= len(parts) or size == 0:
break
name = " ".join(parts[i + 2 :])
return (section, name, size, address)
return None

@@ -1,493 +0,0 @@
"""Analyzer for RAM-stored strings in ESP8266/ESP32 firmware ELF files.
This module identifies strings stored in RAM sections (.data, .bss, and, on
the ESP8266, .rodata) rather than in flash sections (.irom0.text, .irom.text),
which matters on memory-constrained platforms like the ESP8266.
"""
from __future__ import annotations
from collections import defaultdict
from dataclasses import dataclass
import logging
from pathlib import Path
import re
import subprocess
from .demangle import batch_demangle
from .toolchain import find_tool
_LOGGER = logging.getLogger(__name__)
# ESP8266: .rodata is in RAM (DRAM), not flash
# ESP32: .rodata is in flash, mapped to data bus
ESP8266_RAM_SECTIONS = frozenset([".data", ".rodata", ".bss"])
ESP8266_FLASH_SECTIONS = frozenset([".irom0.text", ".irom.text", ".text"])
# ESP32: .rodata is memory-mapped from flash
ESP32_RAM_SECTIONS = frozenset([".data", ".bss", ".dram0.data", ".dram0.bss"])
ESP32_FLASH_SECTIONS = frozenset([".text", ".rodata", ".flash.text", ".flash.rodata"])
# nm symbol types for data symbols (D/d = data, R/r = read-only data,
# B/b = BSS; uppercase = global, lowercase = local)
DATA_SYMBOL_TYPES = frozenset(["D", "d", "R", "r", "B", "b"])
@dataclass
class SectionInfo:
"""Information about an ELF section."""
name: str
address: int
size: int
@dataclass
class RamString:
"""A string found in RAM."""
section: str
address: int
content: str
@property
def size(self) -> int:
"""Size in bytes including null terminator."""
return len(self.content) + 1
@dataclass
class RamSymbol:
"""A symbol found in RAM."""
name: str
sym_type: str
address: int
size: int
section: str
demangled: str = "" # Demangled name, set after batch demangling
class RamStringsAnalyzer:
"""Analyzes ELF files to find strings stored in RAM."""
def __init__(
self,
elf_path: str,
objdump_path: str | None = None,
min_length: int = 8,
platform: str = "esp32",
) -> None:
"""Initialize the RAM strings analyzer.
Args:
elf_path: Path to the ELF file to analyze
objdump_path: Path to objdump binary (used to find other tools)
min_length: Minimum string length to report (default: 8)
platform: Platform name ("esp8266", "esp32", etc.) for section mapping
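Example (illustrative; the ELF path is hypothetical):
analyzer = RamStringsAnalyzer("build/firmware.elf", platform="esp8266")
analyzer.analyze()
print(analyzer.generate_report())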
"""
self.elf_path = Path(elf_path)
if not self.elf_path.exists():
raise FileNotFoundError(f"ELF file not found: {elf_path}")
self.objdump_path = objdump_path
self.min_length = min_length
self.platform = platform
# Set RAM/flash sections based on platform
if self.platform == "esp8266":
self.ram_sections = ESP8266_RAM_SECTIONS
self.flash_sections = ESP8266_FLASH_SECTIONS
else:
# ESP32 and other platforms
self.ram_sections = ESP32_RAM_SECTIONS
self.flash_sections = ESP32_FLASH_SECTIONS
self.sections: dict[str, SectionInfo] = {}
self.ram_strings: list[RamString] = []
self.ram_symbols: list[RamSymbol] = []
def _run_command(self, cmd: list[str]) -> str:
"""Run a command and return its output."""
try:
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
return result.stdout
except subprocess.CalledProcessError as e:
_LOGGER.debug("Command failed: %s - %s", " ".join(cmd), e.stderr)
raise
except FileNotFoundError:
_LOGGER.warning("Command not found: %s", cmd[0])
raise
def analyze(self) -> None:
"""Perform the full RAM analysis."""
self._parse_sections()
self._extract_strings()
self._analyze_symbols()
self._demangle_symbols()
def _parse_sections(self) -> None:
"""Parse section headers from ELF file."""
objdump = find_tool("objdump", self.objdump_path)
if not objdump:
_LOGGER.error("Could not find objdump command")
return
try:
output = self._run_command([objdump, "-h", str(self.elf_path)])
except (subprocess.CalledProcessError, FileNotFoundError):
return
# Parse section headers
# Format: Idx Name Size VMA LMA File off Algn
section_pattern = re.compile(
r"^\s*\d+\s+(\S+)\s+([0-9a-fA-F]+)\s+([0-9a-fA-F]+)"
)
for line in output.split("\n"):
if match := section_pattern.match(line):
name = match.group(1)
size = int(match.group(2), 16)
vma = int(match.group(3), 16)
self.sections[name] = SectionInfo(name, vma, size)
def _extract_strings(self) -> None:
"""Extract strings from RAM sections."""
objdump = find_tool("objdump", self.objdump_path)
if not objdump:
return
for section_name in self.ram_sections:
if section_name not in self.sections:
continue
try:
output = self._run_command(
[objdump, "-s", "-j", section_name, str(self.elf_path)]
)
except subprocess.CalledProcessError:
# Section may exist but have no content (e.g., .bss)
continue
except FileNotFoundError:
continue
strings = self._parse_hex_dump(output, section_name)
self.ram_strings.extend(strings)
def _parse_hex_dump(self, output: str, section_name: str) -> list[RamString]:
"""Parse hex dump output to extract strings.
Args:
output: Output from objdump -s
section_name: Name of the section being parsed
Returns:
List of RamString objects
"""
strings: list[RamString] = []
current_string = bytearray()
string_start_addr = 0
for line in output.split("\n"):
# Lines look like: " 3ffef8a0 00000000 00000000 00000000 00000000 ................"
match = re.match(r"^\s+([0-9a-fA-F]+)\s+((?:[0-9a-fA-F]{2,8}\s*)+)", line)
if not match:
continue
addr = int(match.group(1), 16)
hex_data = match.group(2).strip()
# Convert hex to bytes
hex_bytes = hex_data.split()
byte_offset = 0
for hex_chunk in hex_bytes:
# Handle both byte-by-byte and word formats
for i in range(0, len(hex_chunk), 2):
byte_val = int(hex_chunk[i : i + 2], 16)
if 0x20 <= byte_val <= 0x7E: # Printable ASCII
if not current_string:
string_start_addr = addr + byte_offset
current_string.append(byte_val)
else:
if byte_val == 0 and len(current_string) >= self.min_length:
# Found null terminator
strings.append(
RamString(
section=section_name,
address=string_start_addr,
content=current_string.decode(
"ascii", errors="ignore"
),
)
)
current_string = bytearray()
byte_offset += 1
return strings
def _analyze_symbols(self) -> None:
"""Analyze symbols in RAM sections."""
nm = find_tool("nm", self.objdump_path)
if not nm:
return
try:
output = self._run_command([nm, "-S", "--size-sort", str(self.elf_path)])
except (subprocess.CalledProcessError, FileNotFoundError):
return
for line in output.split("\n"):
parts = line.split()
if len(parts) < 4:
continue
try:
addr = int(parts[0], 16)
size = int(parts[1], 16) if parts[1] != "?" else 0
except ValueError:
continue
sym_type = parts[2]
name = " ".join(parts[3:])
# Filter for data symbols
if sym_type not in DATA_SYMBOL_TYPES:
continue
# Check if symbol is in a RAM section
for section_name in self.ram_sections:
if section_name not in self.sections:
continue
section = self.sections[section_name]
if section.address <= addr < section.address + section.size:
self.ram_symbols.append(
RamSymbol(
name=name,
sym_type=sym_type,
address=addr,
size=size,
section=section_name,
)
)
break
def _demangle_symbols(self) -> None:
"""Batch demangle all RAM symbol names."""
if not self.ram_symbols:
return
# Collect all symbol names and demangle them
symbol_names = [s.name for s in self.ram_symbols]
demangle_cache = batch_demangle(symbol_names, objdump_path=self.objdump_path)
# Assign demangled names to symbols
for symbol in self.ram_symbols:
symbol.demangled = demangle_cache.get(symbol.name, symbol.name)
def _get_sections_size(self, section_names: frozenset[str]) -> int:
"""Get total size of specified sections."""
return sum(
section.size
for name, section in self.sections.items()
if name in section_names
)
def get_total_ram_usage(self) -> int:
"""Get total RAM usage from RAM sections."""
return self._get_sections_size(self.ram_sections)
def get_total_flash_usage(self) -> int:
"""Get total flash usage from flash sections."""
return self._get_sections_size(self.flash_sections)
def get_total_string_bytes(self) -> int:
"""Get total bytes used by strings in RAM."""
return sum(s.size for s in self.ram_strings)
def get_repeated_strings(self) -> list[tuple[str, int]]:
"""Find strings that appear multiple times.
Returns:
List of (string, count) tuples, sorted by count * (string length + 1),
an approximation of the potential deduplication savings in bytes
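Example (illustrative): an entry ("sensor.update", 3) means the string is
stored three times; two of the copies could in principle be deduplicated.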
"""
string_counts: dict[str, int] = defaultdict(int)
for ram_string in self.ram_strings:
string_counts[ram_string.content] += 1
return sorted(
[(s, c) for s, c in string_counts.items() if c > 1],
key=lambda x: x[1] * (len(x[0]) + 1),
reverse=True,
)
def get_long_strings(self, min_len: int = 20) -> list[RamString]:
"""Get strings longer than the specified length.
Args:
min_len: Minimum string length
Returns:
List of RamString objects sorted by length
"""
return sorted(
[s for s in self.ram_strings if len(s.content) >= min_len],
key=lambda x: len(x.content),
reverse=True,
)
def get_largest_symbols(self, min_size: int = 100) -> list[RamSymbol]:
"""Get RAM symbols larger than the specified size.
Args:
min_size: Minimum symbol size in bytes
Returns:
List of RamSymbol objects sorted by size
"""
return sorted(
[s for s in self.ram_symbols if s.size >= min_size],
key=lambda x: x.size,
reverse=True,
)
def generate_report(self, show_all_sections: bool = False) -> str:
"""Generate a formatted RAM strings analysis report.
Args:
show_all_sections: If True, show all sections, not just RAM
Returns:
Formatted report string
"""
lines: list[str] = []
table_width = 80
lines.append("=" * table_width)
lines.append(
f"RAM Strings Analysis ({self.platform.upper()})".center(table_width)
)
lines.append("=" * table_width)
lines.append("")
# Section Analysis
lines.append("SECTION ANALYSIS")
lines.append("-" * table_width)
lines.append(f"{'Section':<20} {'Address':<12} {'Size':<12} {'Location'}")
lines.append("-" * table_width)
total_ram_usage = 0
total_flash_usage = 0
for name, section in sorted(self.sections.items(), key=lambda x: x[1].address):
if name in self.ram_sections:
location = "RAM"
total_ram_usage += section.size
elif name in self.flash_sections:
location = "FLASH"
total_flash_usage += section.size
else:
location = "OTHER"
if show_all_sections or name in self.ram_sections:
lines.append(
f"{name:<20} 0x{section.address:08x} {section.size:>8} B {location}"
)
lines.append("-" * table_width)
lines.append(f"Total RAM sections size: {total_ram_usage:,} bytes")
lines.append(f"Total Flash sections size: {total_flash_usage:,} bytes")
# Strings in RAM
lines.append("")
lines.append("=" * table_width)
lines.append("STRINGS IN RAM SECTIONS")
lines.append("=" * table_width)
lines.append(
"Note: .bss sections contain uninitialized data (no strings to extract)"
)
# Group strings by section
strings_by_section: dict[str, list[RamString]] = defaultdict(list)
for ram_string in self.ram_strings:
strings_by_section[ram_string.section].append(ram_string)
for section_name in sorted(strings_by_section.keys()):
section_strings = strings_by_section[section_name]
lines.append(f"\nSection: {section_name}")
lines.append("-" * 40)
for ram_string in sorted(section_strings, key=lambda x: x.address):
clean_string = ram_string.content[:100] + (
"..." if len(ram_string.content) > 100 else ""
)
lines.append(
f' 0x{ram_string.address:08x}: "{clean_string}" (len={len(ram_string.content)})'
)
# Large RAM symbols
lines.append("")
lines.append("=" * table_width)
lines.append("LARGE DATA SYMBOLS IN RAM (>= 50 bytes)")
lines.append("=" * table_width)
largest_symbols = self.get_largest_symbols(50)
lines.append(f"\n{'Symbol':<50} {'Type':<6} {'Size':<10} {'Section'}")
lines.append("-" * table_width)
for symbol in largest_symbols:
# Use demangled name if available, otherwise raw name
display_name = symbol.demangled or symbol.name
name_display = display_name[:49] if len(display_name) > 49 else display_name
lines.append(
f"{name_display:<50} {symbol.sym_type:<6} {symbol.size:>8} B {symbol.section}"
)
# Summary
lines.append("")
lines.append("=" * table_width)
lines.append("SUMMARY")
lines.append("=" * table_width)
lines.append(f"Total strings found in RAM: {len(self.ram_strings)}")
total_string_bytes = self.get_total_string_bytes()
lines.append(f"Total bytes used by strings: {total_string_bytes:,}")
# Optimization targets
lines.append("")
lines.append("=" * table_width)
lines.append("POTENTIAL OPTIMIZATION TARGETS")
lines.append("=" * table_width)
# Repeated strings
repeated = self.get_repeated_strings()[:10]
if repeated:
lines.append("\nRepeated strings (could be deduplicated):")
for string, count in repeated:
savings = (count - 1) * (len(string) + 1)
clean_string = string[:50] + ("..." if len(string) > 50 else "")
lines.append(
f' "{clean_string}" - appears {count} times (potential savings: {savings} bytes)'
)
# Long strings - platform-specific advice
long_strings = self.get_long_strings(20)[:10]
if long_strings:
if self.platform == "esp8266":
lines.append(
"\nLong strings that could be moved to PROGMEM (>= 20 chars):"
)
else:
# ESP32: strings in DRAM are typically there for a reason
# (interrupt handlers, pre-flash-init code, etc.)
lines.append("\nLong strings in DRAM (>= 20 chars):")
lines.append(
"Note: ESP32 DRAM strings may be required for interrupt/early-boot contexts"
)
for ram_string in long_strings:
clean_string = ram_string.content[:60] + (
"..." if len(ram_string.content) > 60 else ""
)
lines.append(
f' {ram_string.section} @ 0x{ram_string.address:08x}: "{clean_string}" ({len(ram_string.content)} bytes)'
)
lines.append("")
return "\n".join(lines)

@@ -1,173 +0,0 @@
"""Toolchain utilities for memory analysis."""
from __future__ import annotations
import logging
import os
from pathlib import Path
import subprocess
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from collections.abc import Sequence
_LOGGER = logging.getLogger(__name__)
# Platform-specific toolchain prefixes
TOOLCHAIN_PREFIXES = [
"xtensa-lx106-elf-", # ESP8266
"xtensa-esp32-elf-", # ESP32
"xtensa-esp-elf-", # ESP32 (newer IDF)
"arm-zephyr-eabi-", # nRF52/Zephyr SDK
"arm-none-eabi-", # Generic ARM (RP2040, etc.)
"", # System default (no prefix)
]
def _find_in_platformio_packages(tool_name: str) -> str | None:
"""Search for a tool in PlatformIO package directories.
This handles cases like Zephyr SDK where tools are installed in nested
directories that aren't in PATH.
Args:
tool_name: Name of the tool (e.g., "readelf", "objdump")
Returns:
Full path to the tool or None if not found
"""
# Get PlatformIO packages directory
platformio_home = Path(os.path.expanduser("~/.platformio/packages"))
if not platformio_home.exists():
return None
# Search patterns for toolchains that might contain the tool
# Order matters - more specific patterns first
search_patterns = [
# Zephyr SDK deeply nested structure (4 levels)
# e.g., toolchain-gccarmnoneeabi/zephyr-sdk-0.17.4/arm-zephyr-eabi/bin/arm-zephyr-eabi-objdump
f"toolchain-*/*/*/bin/*-{tool_name}",
# Zephyr SDK nested structure (3 levels)
f"toolchain-*/*/bin/*-{tool_name}",
f"toolchain-*/bin/*-{tool_name}",
# Standard PlatformIO toolchain structure
f"toolchain-*/bin/*{tool_name}",
]
for pattern in search_patterns:
matches = list(platformio_home.glob(pattern))
if matches:
# Sort to get consistent results, prefer arm-zephyr-eabi over arm-none-eabi
matches.sort(key=lambda p: ("zephyr" not in str(p), str(p)))
tool_path = str(matches[0])
_LOGGER.debug("Found %s in PlatformIO packages: %s", tool_name, tool_path)
return tool_path
return None
def resolve_tool_path(
tool_name: str,
derived_path: str | None,
objdump_path: str | None = None,
) -> str | None:
"""Resolve a tool path, falling back to find_tool if derived path doesn't exist.
Args:
tool_name: Name of the tool (e.g., "objdump", "readelf")
derived_path: Path derived from idedata (may not exist for some platforms)
objdump_path: Path to objdump binary to derive other tool paths from
Returns:
Resolved path to the tool, or the original derived_path if it exists
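Example (illustrative): if idedata reports a readelf path that does not
exist on disk, resolve_tool_path("readelf", derived_path, objdump_path)
falls back to find_tool("readelf", objdump_path).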
"""
if derived_path and not Path(derived_path).exists():
found = find_tool(tool_name, objdump_path)
if found:
_LOGGER.debug(
"Derived %s path %s not found, using %s",
tool_name,
derived_path,
found,
)
return found
return derived_path
def find_tool(
tool_name: str,
objdump_path: str | None = None,
) -> str | None:
"""Find a toolchain tool by name.
First tries to derive the tool path from objdump_path (if provided),
then searches PlatformIO package directories (for cross-compile toolchains),
and finally falls back to searching for platform-specific tools in PATH.
Args:
tool_name: Name of the tool (e.g., "objdump", "nm", "c++filt")
objdump_path: Path to objdump binary to derive other tool paths from
Returns:
Path to the tool or None if not found
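Example (illustrative paths):
find_tool("nm", "/toolchain/bin/xtensa-esp32-elf-objdump")
# -> "/toolchain/bin/xtensa-esp32-elf-nm" if that file exists next to objdump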
"""
# Try to derive from objdump path first (most reliable)
if objdump_path and objdump_path != "objdump":
objdump_file = Path(objdump_path)
# Replace just the filename portion, preserving any prefix (e.g., xtensa-esp32-elf-)
new_name = objdump_file.name.replace("objdump", tool_name)
potential_path = str(objdump_file.with_name(new_name))
if Path(potential_path).exists():
_LOGGER.debug("Found %s at: %s", tool_name, potential_path)
return potential_path
# Search in PlatformIO packages directory first (handles Zephyr SDK, etc.)
# This must come before PATH search because system tools (e.g., /usr/bin/objdump)
# are for the host architecture, not the target (ARM, Xtensa, etc.)
if found := _find_in_platformio_packages(tool_name):
return found
# Try platform-specific tools in PATH (fallback for when tools are installed globally)
for prefix in TOOLCHAIN_PREFIXES:
cmd = f"{prefix}{tool_name}"
try:
subprocess.run([cmd, "--version"], capture_output=True, check=True)
_LOGGER.debug("Found %s: %s", tool_name, cmd)
return cmd
except (subprocess.CalledProcessError, FileNotFoundError):
continue
_LOGGER.warning("Could not find %s tool", tool_name)
return None
def run_tool(
cmd: Sequence[str],
timeout: int = 30,
) -> subprocess.CompletedProcess[str] | None:
"""Run a toolchain command and return the result.
Args:
cmd: Command and arguments to run
timeout: Timeout in seconds
Returns:
CompletedProcess on success, None on failure
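Example (illustrative):
result = run_tool(["nm", "--size-sort", "libfoo.a"], timeout=10) # None on failure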
"""
try:
return subprocess.run(
cmd,
capture_output=True,
text=True,
timeout=timeout,
check=False,
)
except subprocess.TimeoutExpired:
_LOGGER.warning("Command timed out: %s", " ".join(cmd))
return None
except FileNotFoundError:
_LOGGER.warning("Command not found: %s", cmd[0])
return None
except OSError as e:
_LOGGER.warning("Failed to run command %s: %s", cmd[0], e)
return None

@@ -15,15 +15,7 @@ from esphome.const import (
CONF_TYPE_ID,
CONF_UPDATE_INTERVAL,
)
from esphome.core import ID, Lambda
from esphome.cpp_generator import (
LambdaExpression,
MockObj,
MockObjClass,
TemplateArgsType,
)
from esphome.schema_extractors import SCHEMA_EXTRACT, schema_extractor
from esphome.types import ConfigType
from esphome.util import Registry
@@ -57,11 +49,11 @@ def maybe_conf(conf, *validators):
return validate
def register_action(name: str, action_type: MockObjClass, schema: cv.Schema):
def register_action(name, action_type, schema):
return ACTION_REGISTRY.register(name, action_type, schema)
def register_condition(name: str, condition_type: MockObjClass, schema: cv.Schema):
def register_condition(name, condition_type, schema):
return CONDITION_REGISTRY.register(name, condition_type, schema)
@@ -92,7 +84,6 @@ def validate_potentially_or_condition(value):
DelayAction = cg.esphome_ns.class_("DelayAction", Action, cg.Component)
LambdaAction = cg.esphome_ns.class_("LambdaAction", Action)
StatelessLambdaAction = cg.esphome_ns.class_("StatelessLambdaAction", Action)
IfAction = cg.esphome_ns.class_("IfAction", Action)
WhileAction = cg.esphome_ns.class_("WhileAction", Action)
RepeatAction = cg.esphome_ns.class_("RepeatAction", Action)
@@ -103,40 +94,9 @@ ResumeComponentAction = cg.esphome_ns.class_("ResumeComponentAction", Action)
Automation = cg.esphome_ns.class_("Automation")
LambdaCondition = cg.esphome_ns.class_("LambdaCondition", Condition)
StatelessLambdaCondition = cg.esphome_ns.class_("StatelessLambdaCondition", Condition)
ForCondition = cg.esphome_ns.class_("ForCondition", Condition, cg.Component)
def new_lambda_pvariable(
id_obj: ID,
lambda_expr: LambdaExpression,
stateless_class: MockObjClass,
template_arg: cg.TemplateArguments | None = None,
) -> MockObj:
"""Create Pvariable for lambda, using stateless class if applicable.
Combines ID selection and Pvariable creation in one call. For stateless
lambdas (empty capture), uses function pointer instead of std::function.
Args:
id_obj: The ID object (action_id, condition_id, or filter_id)
lambda_expr: The lambda expression object
stateless_class: The stateless class to use for stateless lambdas
template_arg: Optional template arguments (for actions/conditions)
Returns:
The created Pvariable
"""
# For stateless lambdas, use function pointer instead of std::function
if lambda_expr.capture == "":
id_obj = id_obj.copy()
id_obj.type = stateless_class
if template_arg is not None:
return cg.new_Pvariable(id_obj, template_arg, lambda_expr)
return cg.new_Pvariable(id_obj, lambda_expr)
def validate_automation(extra_schema=None, extra_validators=None, single=False):
if extra_schema is None:
extra_schema = {}
@@ -182,7 +142,7 @@ def validate_automation(extra_schema=None, extra_validators=None, single=False):
value = cv.Schema([extra_validators])(value)
if single:
if len(value) != 1:
raise cv.Invalid("This trigger allows only a single automation")
raise cv.Invalid("Cannot have more than 1 automation for templates")
return value[0]
return value
@@ -204,82 +164,45 @@ XorCondition = cg.esphome_ns.class_("XorCondition", Condition)
@register_condition("and", AndCondition, validate_condition_list)
async def and_condition_to_code(
config: ConfigType,
condition_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
async def and_condition_to_code(config, condition_id, template_arg, args):
conditions = await build_condition_list(config, template_arg, args)
return cg.new_Pvariable(condition_id, template_arg, conditions)
@register_condition("or", OrCondition, validate_condition_list)
async def or_condition_to_code(
config: ConfigType,
condition_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
async def or_condition_to_code(config, condition_id, template_arg, args):
conditions = await build_condition_list(config, template_arg, args)
return cg.new_Pvariable(condition_id, template_arg, conditions)
@register_condition("all", AndCondition, validate_condition_list)
async def all_condition_to_code(
config: ConfigType,
condition_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
async def all_condition_to_code(config, condition_id, template_arg, args):
conditions = await build_condition_list(config, template_arg, args)
return cg.new_Pvariable(condition_id, template_arg, conditions)
@register_condition("any", OrCondition, validate_condition_list)
async def any_condition_to_code(
config: ConfigType,
condition_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
async def any_condition_to_code(config, condition_id, template_arg, args):
conditions = await build_condition_list(config, template_arg, args)
return cg.new_Pvariable(condition_id, template_arg, conditions)
@register_condition("not", NotCondition, validate_potentially_and_condition)
async def not_condition_to_code(
config: ConfigType,
condition_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
async def not_condition_to_code(config, condition_id, template_arg, args):
condition = await build_condition(config, template_arg, args)
return cg.new_Pvariable(condition_id, template_arg, condition)
@register_condition("xor", XorCondition, validate_condition_list)
async def xor_condition_to_code(
config: ConfigType,
condition_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
async def xor_condition_to_code(config, condition_id, template_arg, args):
conditions = await build_condition_list(config, template_arg, args)
return cg.new_Pvariable(condition_id, template_arg, conditions)
@register_condition("lambda", LambdaCondition, cv.returning_lambda)
async def lambda_condition_to_code(
config: ConfigType,
condition_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
async def lambda_condition_to_code(config, condition_id, template_arg, args):
lambda_ = await cg.process_lambda(config, args, return_type=bool)
return new_lambda_pvariable(
condition_id, lambda_, StatelessLambdaCondition, template_arg
)
return cg.new_Pvariable(condition_id, template_arg, lambda_)
@register_condition(
@@ -294,12 +217,7 @@ async def lambda_condition_to_code(
}
).extend(cv.COMPONENT_SCHEMA),
)
async def for_condition_to_code(
config: ConfigType,
condition_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
async def for_condition_to_code(config, condition_id, template_arg, args):
condition = await build_condition(
config[CONF_CONDITION], cg.TemplateArguments(), []
)
@@ -310,39 +228,10 @@ async def for_condition_to_code(
return var
@register_condition(
"component.is_idle",
LambdaCondition,
maybe_simple_id(
{
cv.Required(CONF_ID): cv.use_id(cg.Component),
}
),
)
async def component_is_idle_condition_to_code(
config: ConfigType,
condition_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
comp = await cg.get_variable(config[CONF_ID])
lambda_ = await cg.process_lambda(
Lambda(f"return {comp}->is_idle();"), args, return_type=bool
)
return new_lambda_pvariable(
condition_id, lambda_, StatelessLambdaCondition, template_arg
)
@register_action(
"delay", DelayAction, cv.templatable(cv.positive_time_period_milliseconds)
)
async def delay_action_to_code(
config: ConfigType,
action_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
async def delay_action_to_code(config, action_id, template_arg, args):
var = cg.new_Pvariable(action_id, template_arg)
await cg.register_component(var, {})
template_ = await cg.templatable(config, args, cg.uint32)
@@ -367,15 +256,10 @@ async def delay_action_to_code(
cv.has_at_least_one_key(CONF_CONDITION, CONF_ANY, CONF_ALL),
),
)
async def if_action_to_code(
config: ConfigType,
action_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
async def if_action_to_code(config, action_id, template_arg, args):
cond_conf = next(el for el in config if el in (CONF_ANY, CONF_ALL, CONF_CONDITION))
condition = await build_condition(config[cond_conf], template_arg, args)
var = cg.new_Pvariable(action_id, template_arg, condition)
conditions = await build_condition(config[cond_conf], template_arg, args)
var = cg.new_Pvariable(action_id, template_arg, conditions)
if CONF_THEN in config:
actions = await build_action_list(config[CONF_THEN], template_arg, args)
cg.add(var.add_then(actions))
@@ -395,14 +279,9 @@ async def if_action_to_code(
}
),
)
async def while_action_to_code(
config: ConfigType,
action_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
condition = await build_condition(config[CONF_CONDITION], template_arg, args)
var = cg.new_Pvariable(action_id, template_arg, condition)
async def while_action_to_code(config, action_id, template_arg, args):
conditions = await build_condition(config[CONF_CONDITION], template_arg, args)
var = cg.new_Pvariable(action_id, template_arg, conditions)
actions = await build_action_list(config[CONF_THEN], template_arg, args)
cg.add(var.add_then(actions))
return var
@@ -418,12 +297,7 @@ async def while_action_to_code(
}
),
)
async def repeat_action_to_code(
config: ConfigType,
action_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
async def repeat_action_to_code(config, action_id, template_arg, args):
var = cg.new_Pvariable(action_id, template_arg)
count_template = await cg.templatable(config[CONF_COUNT], args, cg.uint32)
cg.add(var.set_count(count_template))
@@ -446,14 +320,9 @@ _validate_wait_until = cv.maybe_simple_value(
@register_action("wait_until", WaitUntilAction, _validate_wait_until)
async def wait_until_action_to_code(
config: ConfigType,
action_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
condition = await build_condition(config[CONF_CONDITION], template_arg, args)
var = cg.new_Pvariable(action_id, template_arg, condition)
async def wait_until_action_to_code(config, action_id, template_arg, args):
conditions = await build_condition(config[CONF_CONDITION], template_arg, args)
var = cg.new_Pvariable(action_id, template_arg, conditions)
if CONF_TIMEOUT in config:
template_ = await cg.templatable(config[CONF_TIMEOUT], args, cg.uint32)
cg.add(var.set_timeout_value(template_))
@@ -462,14 +331,9 @@ async def wait_until_action_to_code(
@register_action("lambda", LambdaAction, cv.lambda_)
async def lambda_action_to_code(
config: ConfigType,
action_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
async def lambda_action_to_code(config, action_id, template_arg, args):
lambda_ = await cg.process_lambda(config, args, return_type=cg.void)
return new_lambda_pvariable(action_id, lambda_, StatelessLambdaAction, template_arg)
return cg.new_Pvariable(action_id, template_arg, lambda_)
@register_action(
@@ -481,12 +345,7 @@ async def lambda_action_to_code(
}
),
)
async def component_update_action_to_code(
config: ConfigType,
action_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
async def component_update_action_to_code(config, action_id, template_arg, args):
comp = await cg.get_variable(config[CONF_ID])
return cg.new_Pvariable(action_id, template_arg, comp)
@@ -500,12 +359,7 @@ async def component_update_action_to_code(
}
),
)
async def component_suspend_action_to_code(
config: ConfigType,
action_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
async def component_suspend_action_to_code(config, action_id, template_arg, args):
comp = await cg.get_variable(config[CONF_ID])
return cg.new_Pvariable(action_id, template_arg, comp)
@@ -522,12 +376,7 @@ async def component_suspend_action_to_code(
}
),
)
async def component_resume_action_to_code(
config: ConfigType,
action_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
async def component_resume_action_to_code(config, action_id, template_arg, args):
comp = await cg.get_variable(config[CONF_ID])
var = cg.new_Pvariable(action_id, template_arg, comp)
if CONF_UPDATE_INTERVAL in config:
@@ -536,51 +385,43 @@ async def component_resume_action_to_code(
return var
async def build_action(
full_config: ConfigType, template_arg: cg.TemplateArguments, args: TemplateArgsType
) -> MockObj:
async def build_action(full_config, template_arg, args):
registry_entry, config = cg.extract_registry_entry_config(
ACTION_REGISTRY, full_config
)
action_id = full_config[CONF_TYPE_ID]
builder = registry_entry.coroutine_fun
return await builder(config, action_id, template_arg, args)
ret = await builder(config, action_id, template_arg, args)
return ret
async def build_action_list(
config: list[ConfigType], templ: cg.TemplateArguments, arg_type: TemplateArgsType
) -> list[MockObj]:
actions: list[MockObj] = []
async def build_action_list(config, templ, arg_type):
actions = []
for conf in config:
action = await build_action(conf, templ, arg_type)
actions.append(action)
return actions
async def build_condition(
full_config: ConfigType, template_arg: cg.TemplateArguments, args: TemplateArgsType
) -> MockObj:
async def build_condition(full_config, template_arg, args):
registry_entry, config = cg.extract_registry_entry_config(
CONDITION_REGISTRY, full_config
)
action_id = full_config[CONF_TYPE_ID]
builder = registry_entry.coroutine_fun
return await builder(config, action_id, template_arg, args)
ret = await builder(config, action_id, template_arg, args)
return ret
async def build_condition_list(
config: ConfigType, templ: cg.TemplateArguments, args: TemplateArgsType
) -> list[MockObj]:
conditions: list[MockObj] = []
async def build_condition_list(config, templ, args):
conditions = []
for conf in config:
condition = await build_condition(conf, templ, args)
conditions.append(condition)
return conditions
async def build_automation(
trigger: MockObj, args: TemplateArgsType, config: ConfigType
) -> MockObj:
async def build_automation(trigger, args, config):
arg_types = [arg[0] for arg in args]
templ = cg.TemplateArguments(*arg_types)
obj = cg.new_Pvariable(config[CONF_AUTOMATION_ID], templ, trigger)
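For context, `args` here is the list of `(type, name)` pairs describing a trigger's template parameters; only the types feed the `TemplateArguments`. A hedged sketch of the shapes involved (the specific types are illustrative):

```python
# Illustrative only: a trigger exposing (float x, uint32_t duration).
args = [(cg.float_, "x"), (cg.uint32, "duration")]
arg_types = [arg[0] for arg in args]      # -> [cg.float_, cg.uint32]
templ = cg.TemplateArguments(*arg_types)  # renders as <float, uint32_t>
```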

View File

@@ -1,3 +1,5 @@
import os
from esphome.const import __version__
from esphome.core import CORE
from esphome.helpers import mkdir_p, read_file, write_file_if_changed
@@ -61,7 +63,7 @@ def write_ini(content):
update_storage_json()
path = CORE.relative_build_path("platformio.ini")
if path.is_file():
if os.path.isfile(path):
text = read_file(path)
content_format = find_begin_end(
text, INI_AUTO_GENERATE_BEGIN, INI_AUTO_GENERATE_END

View File

@@ -12,7 +12,6 @@ from esphome.cpp_generator import ( # noqa: F401
ArrayInitializer,
Expression,
LineComment,
LogStringLiteral,
MockObj,
MockObjClass,
Pvariable,
@@ -62,7 +61,6 @@ from esphome.cpp_types import ( # noqa: F401
EntityBase,
EntityCategory,
ESPTime,
FixedVector,
GPIOPin,
InternalGPIOPin,
JsonObject,

View File

@@ -30,9 +30,7 @@ void A01nyubComponent::check_buffer_() {
ESP_LOGV(TAG, "Distance from sensor: %f mm, %f m", distance, meters);
this->publish_state(meters);
} else {
char hex_buf[format_hex_pretty_size(4)];
ESP_LOGW(TAG, "Invalid data read from sensor: %s",
format_hex_pretty_to(hex_buf, this->buffer_.data(), this->buffer_.size()));
ESP_LOGW(TAG, "Invalid data read from sensor: %s", format_hex_pretty(this->buffer_).c_str());
}
} else {
ESP_LOGW(TAG, "checksum failed: %02x != %02x", checksum, this->buffer_[3]);

View File

@@ -29,9 +29,7 @@ void A02yyuwComponent::check_buffer_() {
ESP_LOGV(TAG, "Distance from sensor: %f mm", distance);
this->publish_state(distance);
} else {
char hex_buf[format_hex_pretty_size(4)];
ESP_LOGW(TAG, "Invalid data read from sensor: %s",
format_hex_pretty_to(hex_buf, this->buffer_.data(), this->buffer_.size()));
ESP_LOGW(TAG, "Invalid data read from sensor: %s", format_hex_pretty(this->buffer_).c_str());
}
} else {
ESP_LOGW(TAG, "checksum failed: %02x != %02x", checksum, this->buffer_[3]);

View File

@@ -61,10 +61,11 @@ void AbsoluteHumidityComponent::loop() {
ESP_LOGW(TAG, "No valid state from temperature sensor!");
}
if (no_humidity) {
ESP_LOGW(TAG, "No valid state from humidity sensor!");
ESP_LOGW(TAG, "No valid state from temperature sensor!");
}
ESP_LOGW(TAG, "Unable to calculate absolute humidity.");
this->publish_state(NAN);
this->status_set_warning(LOG_STR("Unable to calculate absolute humidity."));
this->status_set_warning();
return;
}
@@ -86,20 +87,18 @@ void AbsoluteHumidityComponent::loop() {
es = es_wobus(temperature_c);
break;
default:
ESP_LOGE(TAG, "Invalid saturation vapor pressure equation selection!");
this->publish_state(NAN);
this->status_set_error(LOG_STR("Invalid saturation vapor pressure equation selection!"));
this->status_set_error();
return;
}
ESP_LOGD(TAG, "Saturation vapor pressure %f kPa", es);
// Calculate absolute humidity
const float absolute_humidity = vapor_density(es, hr, temperature_k);
ESP_LOGD(TAG,
"Saturation vapor pressure %f kPa\n"
"Publishing absolute humidity %f g/m³",
es, absolute_humidity);
// Publish absolute humidity
ESP_LOGD(TAG, "Publishing absolute humidity %f g/m³", absolute_humidity);
this->status_clear_warning();
this->publish_state(absolute_humidity);
}
@@ -166,7 +165,7 @@ float AbsoluteHumidityComponent::es_wobus(float t) {
}
// From https://www.environmentalbiophysics.org/chalk-talk-how-to-calculate-absolute-humidity/
// H/T to https://esphome.io/cookbook/bme280_environment/
// H/T to https://esphome.io/cookbook/bme280_environment.html
// H/T to https://carnotcycle.wordpress.com/2012/08/04/how-to-convert-relative-humidity-to-absolute-humidity/
float AbsoluteHumidityComponent::vapor_density(float es, float hr, float ta) {
// es = saturated vapor pressure (kPa)

View File

@@ -5,7 +5,7 @@ from esphome.const import (
CONF_EQUATION,
CONF_HUMIDITY,
CONF_TEMPERATURE,
DEVICE_CLASS_ABSOLUTE_HUMIDITY,
ICON_WATER,
STATE_CLASS_MEASUREMENT,
UNIT_GRAMS_PER_CUBIC_METER,
)
@@ -27,8 +27,8 @@ EQUATION = {
CONFIG_SCHEMA = (
sensor.sensor_schema(
unit_of_measurement=UNIT_GRAMS_PER_CUBIC_METER,
icon=ICON_WATER,
accuracy_decimals=2,
device_class=DEVICE_CLASS_ABSOLUTE_HUMIDITY,
state_class=STATE_CLASS_MEASUREMENT,
)
.extend(

View File

@@ -1,3 +1,5 @@
#ifdef USE_ARDUINO
#include "ac_dimmer.h"
#include "esphome/core/helpers.h"
#include "esphome/core/log.h"
@@ -7,12 +9,12 @@
#ifdef USE_ESP8266
#include <core_esp8266_waveform.h>
#endif
#ifdef USE_ESP32
#include "hw_timer_esp_idf.h"
#ifdef USE_ESP32_FRAMEWORK_ARDUINO
#include <esp32-hal-timer.h>
#endif
namespace esphome::ac_dimmer {
namespace esphome {
namespace ac_dimmer {
static const char *const TAG = "ac_dimmer";
@@ -25,14 +27,7 @@ static AcDimmerDataStore *all_dimmers[32]; // NOLINT(cppcoreguidelines-avoid-no
/// However other factors like gate driver propagation time
/// are also considered and a really low value is not important
/// See also: https://github.com/esphome/issues/issues/1632
static constexpr uint32_t GATE_ENABLE_TIME = 50;
#ifdef USE_ESP32
/// Timer frequency in Hz (1 MHz = 1µs resolution)
static constexpr uint32_t TIMER_FREQUENCY_HZ = 1000000;
/// Timer interrupt interval in microseconds
static constexpr uint64_t TIMER_INTERVAL_US = 50;
#endif
static const uint32_t GATE_ENABLE_TIME = 50;
/// Function called from timer interrupt
/// Input is current time in microseconds (micros())
@@ -159,7 +154,7 @@ void IRAM_ATTR HOT AcDimmerDataStore::s_gpio_intr(AcDimmerDataStore *store) {
#ifdef USE_ESP32
// ESP32 implementation, uses basically the same code but needs to wrap
// timer_interrupt() function to auto-reschedule
static HWTimer *dimmer_timer = nullptr; // NOLINT(cppcoreguidelines-avoid-non-const-global-variables)
static hw_timer_t *dimmer_timer = nullptr; // NOLINT(cppcoreguidelines-avoid-non-const-global-variables)
void IRAM_ATTR HOT AcDimmerDataStore::s_timer_intr() { timer_interrupt(); }
#endif
@@ -199,15 +194,15 @@ void AcDimmer::setup() {
setTimer1Callback(&timer_interrupt);
#endif
#ifdef USE_ESP32
dimmer_timer = timer_begin(TIMER_FREQUENCY_HZ);
timer_attach_interrupt(dimmer_timer, &AcDimmerDataStore::s_timer_intr);
// timer frequency of 1 MHz
dimmer_timer = timerBegin(1000000);
timerAttachInterrupt(dimmer_timer, &AcDimmerDataStore::s_timer_intr);
// For ESP32, we can't use dynamic interval calculation because the timerX functions
// are not callable from ISR (placed in flash storage).
// Here we just use an interrupt firing every 50 µs.
timer_alarm(dimmer_timer, TIMER_INTERVAL_US, true, 0);
timerAlarm(dimmer_timer, 50, true, 0);
#endif
}
void AcDimmer::write_state(float state) {
state = std::acos(1 - (2 * state)) / std::numbers::pi; // RMS power compensation
auto new_value = static_cast<uint16_t>(roundf(state * 65535));
@@ -215,15 +210,14 @@ void AcDimmer::write_state(float state) {
this->store_.init_cycle = this->init_with_half_cycle_;
this->store_.value = new_value;
}
void AcDimmer::dump_config() {
ESP_LOGCONFIG(TAG, "AcDimmer:");
LOG_PIN(" Output Pin: ", this->gate_pin_);
LOG_PIN(" Zero-Cross Pin: ", this->zero_cross_pin_);
ESP_LOGCONFIG(TAG,
"AcDimmer:\n"
" Min Power: %.1f%%\n"
" Init with half cycle: %s",
this->store_.min_power / 10.0f, YESNO(this->init_with_half_cycle_));
LOG_PIN(" Output Pin: ", this->gate_pin_);
LOG_PIN(" Zero-Cross Pin: ", this->zero_cross_pin_);
if (method_ == DIM_METHOD_LEADING_PULSE) {
ESP_LOGCONFIG(TAG, " Method: leading pulse");
} else if (method_ == DIM_METHOD_LEADING) {
@@ -236,4 +230,7 @@ void AcDimmer::dump_config() {
ESP_LOGV(TAG, " Estimated Frequency: %.3fHz", 1e6f / this->store_.cycle_time_us / 2);
}
} // namespace esphome::ac_dimmer
} // namespace ac_dimmer
} // namespace esphome
#endif // USE_ARDUINO
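Aside: the `write_state` mapping above (unchanged on both sides of this diff) linearizes perceived power for phase-cut dimming; delivered RMS power is not linear in the firing point, so the requested level passes through an arccos before scaling to 16 bits. A quick numeric check, as a sketch:

```python
import math

def rms_compensated(state: float) -> int:
    """Sketch of AcDimmer::write_state's 0.0..1.0 -> 0..65535 mapping."""
    return round(math.acos(1 - 2 * state) / math.pi * 65535)

# rms_compensated(0.0) == 0, rms_compensated(1.0) == 65535,
# rms_compensated(0.5) == 32768 since acos(0) is exactly pi/2.
```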

View File

@@ -1,10 +1,13 @@
#pragma once
#ifdef USE_ARDUINO
#include "esphome/core/component.h"
#include "esphome/core/hal.h"
#include "esphome/components/output/float_output.h"
namespace esphome::ac_dimmer {
namespace esphome {
namespace ac_dimmer {
enum DimMethod { DIM_METHOD_LEADING_PULSE = 0, DIM_METHOD_LEADING, DIM_METHOD_TRAILING };
@@ -61,4 +64,7 @@ class AcDimmer : public output::FloatOutput, public Component {
DimMethod method_;
};
} // namespace esphome::ac_dimmer
} // namespace ac_dimmer
} // namespace esphome
#endif // USE_ARDUINO

View File

@@ -1,152 +0,0 @@
#ifdef USE_ESP32
#include "hw_timer_esp_idf.h"
#include "freertos/FreeRTOS.h"
#include "esphome/core/log.h"
#include "driver/gptimer.h"
#include "esp_clk_tree.h"
#include "soc/clk_tree_defs.h"
static const char *const TAG = "hw_timer_esp_idf";
namespace esphome::ac_dimmer {
// GPTimer divider constraints from ESP-IDF documentation
static constexpr uint32_t GPTIMER_DIVIDER_MIN = 2;
static constexpr uint32_t GPTIMER_DIVIDER_MAX = 65536;
using voidFuncPtr = void (*)();
using voidFuncPtrArg = void (*)(void *);
struct InterruptConfigT {
voidFuncPtr fn{nullptr};
void *arg{nullptr};
};
struct HWTimer {
gptimer_handle_t timer_handle{nullptr};
InterruptConfigT interrupt_handle{};
bool timer_started{false};
};
HWTimer *timer_begin(uint32_t frequency) {
esp_err_t err = ESP_OK;
uint32_t counter_src_hz = 0;
uint32_t divider = 0;
soc_module_clk_t clk;
for (auto clk_candidate : SOC_GPTIMER_CLKS) {
clk = clk_candidate;
esp_clk_tree_src_get_freq_hz(clk, ESP_CLK_TREE_SRC_FREQ_PRECISION_CACHED, &counter_src_hz);
divider = counter_src_hz / frequency;
if ((divider >= GPTIMER_DIVIDER_MIN) && (divider <= GPTIMER_DIVIDER_MAX)) {
break;
} else {
divider = 0;
}
}
if (divider == 0) {
ESP_LOGE(TAG, "Resolution not possible; aborting");
return nullptr;
}
gptimer_config_t config = {
.clk_src = static_cast<gptimer_clock_source_t>(clk),
.direction = GPTIMER_COUNT_UP,
.resolution_hz = frequency,
.flags = {.intr_shared = true},
};
HWTimer *timer = new HWTimer();
err = gptimer_new_timer(&config, &timer->timer_handle);
if (err != ESP_OK) {
ESP_LOGE(TAG, "GPTimer creation failed; error %d", err);
delete timer;
return nullptr;
}
err = gptimer_enable(timer->timer_handle);
if (err != ESP_OK) {
ESP_LOGE(TAG, "GPTimer enable failed; error %d", err);
gptimer_del_timer(timer->timer_handle);
delete timer;
return nullptr;
}
err = gptimer_start(timer->timer_handle);
if (err != ESP_OK) {
ESP_LOGE(TAG, "GPTimer start failed; error %d", err);
gptimer_disable(timer->timer_handle);
gptimer_del_timer(timer->timer_handle);
delete timer;
return nullptr;
}
timer->timer_started = true;
return timer;
}
bool IRAM_ATTR timer_fn_wrapper(gptimer_handle_t timer, const gptimer_alarm_event_data_t *edata, void *args) {
auto *isr = static_cast<InterruptConfigT *>(args);
if (isr->fn) {
if (isr->arg) {
reinterpret_cast<voidFuncPtrArg>(isr->fn)(isr->arg);
} else {
isr->fn();
}
}
// Return false to indicate that no higher-priority task was woken and no context switch is requested.
return false;
}
static void timer_attach_interrupt_functional_arg(HWTimer *timer, void (*user_func)(void *), void *arg) {
if (timer == nullptr) {
ESP_LOGE(TAG, "Timer handle is nullptr");
return;
}
gptimer_event_callbacks_t cbs = {
.on_alarm = timer_fn_wrapper,
};
timer->interrupt_handle.fn = reinterpret_cast<voidFuncPtr>(user_func);
timer->interrupt_handle.arg = arg;
if (timer->timer_started) {
gptimer_stop(timer->timer_handle);
}
gptimer_disable(timer->timer_handle);
esp_err_t err = gptimer_register_event_callbacks(timer->timer_handle, &cbs, &timer->interrupt_handle);
if (err != ESP_OK) {
ESP_LOGE(TAG, "Timer Attach Interrupt failed; error %d", err);
}
gptimer_enable(timer->timer_handle);
if (timer->timer_started) {
gptimer_start(timer->timer_handle);
}
}
void timer_attach_interrupt(HWTimer *timer, voidFuncPtr user_func) {
timer_attach_interrupt_functional_arg(timer, reinterpret_cast<voidFuncPtrArg>(user_func), nullptr);
}
void timer_alarm(HWTimer *timer, uint64_t alarm_value, bool autoreload, uint64_t reload_count) {
if (timer == nullptr) {
ESP_LOGE(TAG, "Timer handle is nullptr");
return;
}
gptimer_alarm_config_t alarm_cfg = {
.alarm_count = alarm_value,
.reload_count = reload_count,
.flags = {.auto_reload_on_alarm = autoreload},
};
esp_err_t err = gptimer_set_alarm_action(timer->timer_handle, &alarm_cfg);
if (err != ESP_OK) {
ESP_LOGE(TAG, "Timer Alarm Write failed; error %d", err);
}
}
} // namespace esphome::ac_dimmer
#endif
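The divider search in `timer_begin()` above walks the SoC's candidate clock sources and keeps the first one whose integer divider down to the requested resolution lands inside the GPTimer limits. A sketch of just that selection, with made-up candidate frequencies:

```python
# Sketch of the clock/divider selection; candidate frequencies are illustrative.
GPTIMER_DIVIDER_MIN, GPTIMER_DIVIDER_MAX = 2, 65536

def pick_clock(frequency_hz: int, candidates_hz=(80_000_000, 40_000_000)):
    for src_hz in candidates_hz:
        divider = src_hz // frequency_hz
        if GPTIMER_DIVIDER_MIN <= divider <= GPTIMER_DIVIDER_MAX:
            return src_hz, divider
    return None  # no source satisfies the divider constraints -> abort

# pick_clock(1_000_000) -> (80_000_000, 80): a 1 MHz tick from an 80 MHz source.
```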

View File

@@ -1,17 +0,0 @@
#pragma once
#ifdef USE_ESP32
#include "driver/gptimer_types.h"
namespace esphome::ac_dimmer {
struct HWTimer;
HWTimer *timer_begin(uint32_t frequency);
void timer_attach_interrupt(HWTimer *timer, void (*user_func)());
void timer_alarm(HWTimer *timer, uint64_t alarm_value, bool autoreload, uint64_t reload_count);
} // namespace esphome::ac_dimmer
#endif

View File

@@ -3,7 +3,6 @@ import esphome.codegen as cg
from esphome.components import output
import esphome.config_validation as cv
from esphome.const import CONF_ID, CONF_METHOD, CONF_MIN_POWER
from esphome.core import CORE
CODEOWNERS = ["@glmnet"]
@@ -32,16 +31,11 @@ CONFIG_SCHEMA = cv.All(
),
}
).extend(cv.COMPONENT_SCHEMA),
cv.only_with_arduino,
)
async def to_code(config):
if CORE.is_esp8266:
# ac_dimmer uses setTimer1Callback which requires the waveform generator
from esphome.components.esp8266.const import require_waveform
require_waveform()
var = cg.new_Pvariable(config[CONF_ID])
await cg.register_component(var, config)

View File

@@ -9,7 +9,7 @@ static const char *const TAG = "adalight_light_effect";
static const uint32_t ADALIGHT_ACK_INTERVAL = 1000;
static const uint32_t ADALIGHT_RECEIVE_TIMEOUT = 1000;
AdalightLightEffect::AdalightLightEffect(const char *name) : AddressableLightEffect(name) {}
AdalightLightEffect::AdalightLightEffect(const std::string &name) : AddressableLightEffect(name) {}
void AdalightLightEffect::start() {
AddressableLightEffect::start();

View File

@@ -11,7 +11,7 @@ namespace adalight {
class AdalightLightEffect : public light::AddressableLightEffect, public uart::UARTDevice {
public:
AdalightLightEffect(const char *name);
AdalightLightEffect(const std::string &name);
void start() override;
void stop() override;

View File

@@ -1,20 +1,25 @@
from esphome import pins
import esphome.codegen as cg
from esphome.components.esp32 import (
from esphome.components.esp32 import VARIANT_ESP32P4, get_esp32_variant
from esphome.components.esp32.const import (
VARIANT_ESP32,
VARIANT_ESP32C2,
VARIANT_ESP32C3,
VARIANT_ESP32C5,
VARIANT_ESP32C6,
VARIANT_ESP32C61,
VARIANT_ESP32H2,
VARIANT_ESP32P4,
VARIANT_ESP32S2,
VARIANT_ESP32S3,
get_esp32_variant,
)
from esphome.config_helpers import filter_source_files_from_platform
import esphome.config_validation as cv
from esphome.const import CONF_ANALOG, CONF_INPUT, CONF_NUMBER, PLATFORM_ESP8266
from esphome.const import (
CONF_ANALOG,
CONF_INPUT,
CONF_NUMBER,
PLATFORM_ESP8266,
PlatformFramework,
)
from esphome.core import CORE
CODEOWNERS = ["@esphome/core"]
@@ -101,13 +106,6 @@ ESP32_VARIANT_ADC1_PIN_TO_CHANNEL = {
5: adc_channel_t.ADC_CHANNEL_5,
6: adc_channel_t.ADC_CHANNEL_6,
},
# https://docs.espressif.com/projects/esp-idf/en/latest/esp32c61/api-reference/peripherals/gpio.html
VARIANT_ESP32C61: {
1: adc_channel_t.ADC_CHANNEL_0,
3: adc_channel_t.ADC_CHANNEL_1,
4: adc_channel_t.ADC_CHANNEL_2,
5: adc_channel_t.ADC_CHANNEL_3,
},
# https://github.com/espressif/esp-idf/blob/master/components/soc/esp32h2/include/soc/adc_channel.h
VARIANT_ESP32H2: {
1: adc_channel_t.ADC_CHANNEL_0,
@@ -116,17 +114,6 @@ ESP32_VARIANT_ADC1_PIN_TO_CHANNEL = {
4: adc_channel_t.ADC_CHANNEL_3,
5: adc_channel_t.ADC_CHANNEL_4,
},
# https://github.com/espressif/esp-idf/blob/master/components/soc/esp32p4/include/soc/adc_channel.h
VARIANT_ESP32P4: {
16: adc_channel_t.ADC_CHANNEL_0,
17: adc_channel_t.ADC_CHANNEL_1,
18: adc_channel_t.ADC_CHANNEL_2,
19: adc_channel_t.ADC_CHANNEL_3,
20: adc_channel_t.ADC_CHANNEL_4,
21: adc_channel_t.ADC_CHANNEL_5,
22: adc_channel_t.ADC_CHANNEL_6,
23: adc_channel_t.ADC_CHANNEL_7,
},
# https://github.com/espressif/esp-idf/blob/master/components/soc/esp32s2/include/soc/adc_channel.h
VARIANT_ESP32S2: {
1: adc_channel_t.ADC_CHANNEL_0,
@@ -153,6 +140,16 @@ ESP32_VARIANT_ADC1_PIN_TO_CHANNEL = {
9: adc_channel_t.ADC_CHANNEL_8,
10: adc_channel_t.ADC_CHANNEL_9,
},
VARIANT_ESP32P4: {
16: adc_channel_t.ADC_CHANNEL_0,
17: adc_channel_t.ADC_CHANNEL_1,
18: adc_channel_t.ADC_CHANNEL_2,
19: adc_channel_t.ADC_CHANNEL_3,
20: adc_channel_t.ADC_CHANNEL_4,
21: adc_channel_t.ADC_CHANNEL_5,
22: adc_channel_t.ADC_CHANNEL_6,
23: adc_channel_t.ADC_CHANNEL_7,
},
}
# pin to adc2 channel mapping
@@ -183,19 +180,8 @@ ESP32_VARIANT_ADC2_PIN_TO_CHANNEL = {
VARIANT_ESP32C5: {}, # no ADC2
# https://github.com/espressif/esp-idf/blob/master/components/soc/esp32c6/include/soc/adc_channel.h
VARIANT_ESP32C6: {}, # no ADC2
# ESP32-C61 has no ADC2
VARIANT_ESP32C61: {}, # no ADC2
# https://github.com/espressif/esp-idf/blob/master/components/soc/esp32h2/include/soc/adc_channel.h
VARIANT_ESP32H2: {}, # no ADC2
# https://github.com/espressif/esp-idf/blob/master/components/soc/esp32p4/include/soc/adc_channel.h
VARIANT_ESP32P4: {
49: adc_channel_t.ADC_CHANNEL_0,
50: adc_channel_t.ADC_CHANNEL_1,
51: adc_channel_t.ADC_CHANNEL_2,
52: adc_channel_t.ADC_CHANNEL_3,
53: adc_channel_t.ADC_CHANNEL_4,
54: adc_channel_t.ADC_CHANNEL_5,
},
# https://github.com/espressif/esp-idf/blob/master/components/soc/esp32s2/include/soc/adc_channel.h
VARIANT_ESP32S2: {
11: adc_channel_t.ADC_CHANNEL_0,
@@ -222,6 +208,14 @@ ESP32_VARIANT_ADC2_PIN_TO_CHANNEL = {
19: adc_channel_t.ADC_CHANNEL_8,
20: adc_channel_t.ADC_CHANNEL_9,
},
VARIANT_ESP32P4: {
49: adc_channel_t.ADC_CHANNEL_0,
50: adc_channel_t.ADC_CHANNEL_1,
51: adc_channel_t.ADC_CHANNEL_2,
52: adc_channel_t.ADC_CHANNEL_3,
53: adc_channel_t.ADC_CHANNEL_4,
54: adc_channel_t.ADC_CHANNEL_5,
},
}
@@ -273,9 +267,21 @@ def validate_adc_pin(value):
{CONF_ANALOG: True, CONF_INPUT: True}, internal=True
)(value)
if CORE.is_nrf52:
return pins.gpio_pin_schema(
{CONF_ANALOG: True, CONF_INPUT: True}, internal=True
)(value)
raise NotImplementedError
FILTER_SOURCE_FILES = filter_source_files_from_platform(
{
"adc_sensor_esp32.cpp": {
PlatformFramework.ESP32_ARDUINO,
PlatformFramework.ESP32_IDF,
},
"adc_sensor_esp8266.cpp": {PlatformFramework.ESP8266_ARDUINO},
"adc_sensor_rp2040.cpp": {PlatformFramework.RP2040_ARDUINO},
"adc_sensor_libretiny.cpp": {
PlatformFramework.BK72XX_ARDUINO,
PlatformFramework.RTL87XX_ARDUINO,
PlatformFramework.LN882X_ARDUINO,
},
}
)

View File

@@ -13,10 +13,6 @@
#include "hal/adc_types.h" // This defines ADC_CHANNEL_MAX
#endif // USE_ESP32
#ifdef USE_ZEPHYR
#include <zephyr/drivers/adc.h>
#endif
namespace esphome {
namespace adc {
@@ -42,15 +38,15 @@ enum class SamplingMode : uint8_t {
const LogString *sampling_mode_to_str(SamplingMode mode);
template<typename T> class Aggregator {
class Aggregator {
public:
Aggregator(SamplingMode mode);
void add_sample(T value);
T aggregate();
void add_sample(uint32_t value);
uint32_t aggregate();
protected:
T aggr_{0};
uint8_t samples_{0};
uint32_t aggr_{0};
uint32_t samples_{0};
SamplingMode mode_{SamplingMode::AVG};
};
@@ -73,11 +69,6 @@ class ADCSensor : public sensor::Sensor, public PollingComponent, public voltage
/// @return A float representing the setup priority.
float get_setup_priority() const override;
#ifdef USE_ZEPHYR
/// Set the ADC channel to be used by the ADC sensor.
/// @param channel Pointer to an adc_dt_spec structure representing the ADC channel.
void set_adc_channel(const adc_dt_spec *channel) { this->channel_ = channel; }
#endif
/// Set the GPIO pin to be used by the ADC sensor.
/// @param pin Pointer to an InternalGPIOPin representing the ADC input pin.
void set_pin(InternalGPIOPin *pin) { this->pin_ = pin; }
@@ -160,10 +151,6 @@ class ADCSensor : public sensor::Sensor, public PollingComponent, public voltage
#ifdef USE_RP2040
bool is_temperature_{false};
#endif // USE_RP2040
#ifdef USE_ZEPHYR
const struct adc_dt_spec *channel_ = nullptr;
#endif
};
} // namespace adc

View File

@@ -18,15 +18,15 @@ const LogString *sampling_mode_to_str(SamplingMode mode) {
return LOG_STR("unknown");
}
template<typename T> Aggregator<T>::Aggregator(SamplingMode mode) {
Aggregator::Aggregator(SamplingMode mode) {
this->mode_ = mode;
// set to max uint if mode is "min"
if (mode == SamplingMode::MIN) {
this->aggr_ = std::numeric_limits<T>::max();
this->aggr_ = UINT32_MAX;
}
}
template<typename T> void Aggregator<T>::add_sample(T value) {
void Aggregator::add_sample(uint32_t value) {
this->samples_ += 1;
switch (this->mode_) {
@@ -47,7 +47,7 @@ template<typename T> void Aggregator<T>::add_sample(T value) {
}
}
template<typename T> T Aggregator<T>::aggregate() {
uint32_t Aggregator::aggregate() {
if (this->mode_ == SamplingMode::AVG) {
if (this->samples_ == 0) {
return this->aggr_;
@@ -59,12 +59,6 @@ template<typename T> T Aggregator<T>::aggregate() {
return this->aggr_;
}
#ifdef USE_ZEPHYR
template class Aggregator<int32_t>;
#else
template class Aggregator<uint32_t>;
#endif
void ADCSensor::update() {
float value_v = this->sample();
ESP_LOGV(TAG, "'%s': Voltage=%.4fV", this->get_name().c_str(), value_v);
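On either side of this change, the aggregation semantics are the same: MIN/MAX keep a running extreme (MIN seeded with the type's maximum), AVG sums and divides once at the end. A minimal sketch, assuming unsigned samples as in the non-Zephyr build:

```python
# Minimal sketch of the Aggregator semantics shown above.
class Aggregator:
    def __init__(self, mode: str):
        self.mode, self.samples = mode, 0
        self.aggr = 2**32 - 1 if mode == "min" else 0  # UINT32_MAX seed for MIN

    def add_sample(self, value: int) -> None:
        self.samples += 1
        if self.mode == "min":
            self.aggr = min(self.aggr, value)
        elif self.mode == "max":
            self.aggr = max(self.aggr, value)
        else:  # avg: accumulate now, divide in aggregate()
            self.aggr += value

    def aggregate(self) -> int:
        if self.mode == "avg" and self.samples:
            return self.aggr // self.samples
        return self.aggr
```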

View File

@@ -42,11 +42,10 @@ void ADCSensor::setup() {
adc_oneshot_unit_init_cfg_t init_config = {}; // Zero initialize
init_config.unit_id = this->adc_unit_;
init_config.ulp_mode = ADC_ULP_MODE_DISABLE;
#if USE_ESP32_VARIANT_ESP32C3 || USE_ESP32_VARIANT_ESP32C5 || USE_ESP32_VARIANT_ESP32C6 || \
USE_ESP32_VARIANT_ESP32C61 || USE_ESP32_VARIANT_ESP32H2
#if USE_ESP32_VARIANT_ESP32C3 || USE_ESP32_VARIANT_ESP32C5 || USE_ESP32_VARIANT_ESP32C6 || USE_ESP32_VARIANT_ESP32H2
init_config.clk_src = ADC_DIGI_CLK_SRC_DEFAULT;
#endif // USE_ESP32_VARIANT_ESP32C3 || USE_ESP32_VARIANT_ESP32C5 || USE_ESP32_VARIANT_ESP32C6 ||
// USE_ESP32_VARIANT_ESP32C61 || USE_ESP32_VARIANT_ESP32H2
// USE_ESP32_VARIANT_ESP32H2
esp_err_t err = adc_oneshot_new_unit(&init_config, &ADCSensor::shared_adc_handles[this->adc_unit_]);
if (err != ESP_OK) {
ESP_LOGE(TAG, "Error initializing %s: %d", LOG_STR_ARG(adc_unit_to_str(this->adc_unit_)), err);
@@ -75,7 +74,7 @@ void ADCSensor::setup() {
adc_cali_handle_t handle = nullptr;
#if USE_ESP32_VARIANT_ESP32C3 || USE_ESP32_VARIANT_ESP32C5 || USE_ESP32_VARIANT_ESP32C6 || \
USE_ESP32_VARIANT_ESP32C61 || USE_ESP32_VARIANT_ESP32H2 || USE_ESP32_VARIANT_ESP32P4 || USE_ESP32_VARIANT_ESP32S3
USE_ESP32_VARIANT_ESP32S3 || USE_ESP32_VARIANT_ESP32H2 || USE_ESP32_VARIANT_ESP32P4
// RISC-V variants and S3 use curve fitting calibration
adc_cali_curve_fitting_config_t cali_config = {}; // Zero initialize first
#if ESP_IDF_VERSION >= ESP_IDF_VERSION_VAL(5, 3, 0)
@@ -112,7 +111,7 @@ void ADCSensor::setup() {
ESP_LOGW(TAG, "Line fitting calibration failed with error %d, will use uncalibrated readings", err);
this->setup_flags_.calibration_complete = false;
}
#endif // USE_ESP32_VARIANT_ESP32C3 || ESP32C5 || ESP32C6 || ESP32C61 || ESP32H2 || ESP32P4 || ESP32S3
#endif // USE_ESP32_VARIANT_ESP32C3 || ESP32C5 || ESP32C6 || ESP32S3 || ESP32H2
}
this->setup_flags_.init_complete = true;
@@ -121,21 +120,23 @@ void ADCSensor::setup() {
void ADCSensor::dump_config() {
LOG_SENSOR("", "ADC Sensor", this);
LOG_PIN(" Pin: ", this->pin_);
ESP_LOGCONFIG(TAG,
" Channel: %d\n"
" Unit: %s\n"
" Attenuation: %s\n"
" Samples: %i\n"
" Sampling mode: %s",
this->channel_, LOG_STR_ARG(adc_unit_to_str(this->adc_unit_)),
this->autorange_ ? "Auto" : LOG_STR_ARG(attenuation_to_str(this->attenuation_)), this->sample_count_,
LOG_STR_ARG(sampling_mode_to_str(this->sampling_mode_)));
ESP_LOGCONFIG(
TAG,
" Channel: %d\n"
" Unit: %s\n"
" Attenuation: %s\n"
" Samples: %i\n"
" Sampling mode: %s\n"
" Setup Status:\n"
" Handle Init: %s\n"
" Config: %s\n"
" Calibration: %s\n"
" Overall Init: %s",
this->channel_, LOG_STR_ARG(adc_unit_to_str(this->adc_unit_)),
this->autorange_ ? "Auto" : LOG_STR_ARG(attenuation_to_str(this->attenuation_)), this->sample_count_,
LOG_STR_ARG(sampling_mode_to_str(this->sampling_mode_)),
this->setup_flags_.handle_init_complete ? "OK" : "FAILED", this->setup_flags_.config_complete ? "OK" : "FAILED",
this->setup_flags_.calibration_complete ? "OK" : "FAILED", this->setup_flags_.init_complete ? "OK" : "FAILED");
@@ -151,7 +152,7 @@ float ADCSensor::sample() {
}
float ADCSensor::sample_fixed_attenuation_() {
auto aggr = Aggregator<uint32_t>(this->sampling_mode_);
auto aggr = Aggregator(this->sampling_mode_);
for (uint8_t sample = 0; sample < this->sample_count_; sample++) {
int raw;
@@ -185,11 +186,11 @@ float ADCSensor::sample_fixed_attenuation_() {
ESP_LOGW(TAG, "ADC calibration conversion failed with error %d, disabling calibration", err);
if (this->calibration_handle_ != nullptr) {
#if USE_ESP32_VARIANT_ESP32C3 || USE_ESP32_VARIANT_ESP32C5 || USE_ESP32_VARIANT_ESP32C6 || \
USE_ESP32_VARIANT_ESP32C61 || USE_ESP32_VARIANT_ESP32H2 || USE_ESP32_VARIANT_ESP32P4 || USE_ESP32_VARIANT_ESP32S3
USE_ESP32_VARIANT_ESP32S3 || USE_ESP32_VARIANT_ESP32H2 || USE_ESP32_VARIANT_ESP32P4
adc_cali_delete_scheme_curve_fitting(this->calibration_handle_);
#else // Other ESP32 variants use line fitting calibration
adc_cali_delete_scheme_line_fitting(this->calibration_handle_);
#endif // USE_ESP32_VARIANT_ESP32C3 || ESP32C5 || ESP32C6 || ESP32C61 || ESP32H2 || ESP32P4 || ESP32S3
#endif // USE_ESP32_VARIANT_ESP32C3 || ESP32C5 || ESP32C6 || ESP32S3 || ESP32H2
this->calibration_handle_ = nullptr;
}
}
@@ -218,7 +219,7 @@ float ADCSensor::sample_autorange_() {
if (this->calibration_handle_ != nullptr) {
// Delete old calibration handle
#if USE_ESP32_VARIANT_ESP32C3 || USE_ESP32_VARIANT_ESP32C5 || USE_ESP32_VARIANT_ESP32C6 || \
USE_ESP32_VARIANT_ESP32C61 || USE_ESP32_VARIANT_ESP32H2 || USE_ESP32_VARIANT_ESP32P4 || USE_ESP32_VARIANT_ESP32S3
USE_ESP32_VARIANT_ESP32S3 || USE_ESP32_VARIANT_ESP32H2 || USE_ESP32_VARIANT_ESP32P4
adc_cali_delete_scheme_curve_fitting(this->calibration_handle_);
#else
adc_cali_delete_scheme_line_fitting(this->calibration_handle_);
@@ -230,7 +231,7 @@ float ADCSensor::sample_autorange_() {
adc_cali_handle_t handle = nullptr;
#if USE_ESP32_VARIANT_ESP32C3 || USE_ESP32_VARIANT_ESP32C5 || USE_ESP32_VARIANT_ESP32C6 || \
USE_ESP32_VARIANT_ESP32C61 || USE_ESP32_VARIANT_ESP32H2 || USE_ESP32_VARIANT_ESP32P4 || USE_ESP32_VARIANT_ESP32S3
USE_ESP32_VARIANT_ESP32S3 || USE_ESP32_VARIANT_ESP32H2 || USE_ESP32_VARIANT_ESP32P4
adc_cali_curve_fitting_config_t cali_config = {};
#if ESP_IDF_VERSION >= ESP_IDF_VERSION_VAL(5, 3, 0)
cali_config.chan = this->channel_;
@@ -240,8 +241,6 @@ float ADCSensor::sample_autorange_() {
cali_config.bitwidth = ADC_BITWIDTH_DEFAULT;
err = adc_cali_create_scheme_curve_fitting(&cali_config, &handle);
ESP_LOGVV(TAG, "Autorange atten=%d: Calibration handle creation %s (err=%d)", atten,
(err == ESP_OK) ? "SUCCESS" : "FAILED", err);
#else
adc_cali_line_fitting_config_t cali_config = {
.unit_id = this->adc_unit_,
@@ -252,20 +251,16 @@ float ADCSensor::sample_autorange_() {
#endif
};
err = adc_cali_create_scheme_line_fitting(&cali_config, &handle);
ESP_LOGVV(TAG, "Autorange atten=%d: Calibration handle creation %s (err=%d)", atten,
(err == ESP_OK) ? "SUCCESS" : "FAILED", err);
#endif
int raw;
err = adc_oneshot_read(this->adc_handle_, this->channel_, &raw);
ESP_LOGVV(TAG, "Autorange atten=%d: Raw ADC read %s, value=%d (err=%d)", atten,
(err == ESP_OK) ? "SUCCESS" : "FAILED", raw, err);
if (err != ESP_OK) {
ESP_LOGW(TAG, "ADC read failed in autorange with error %d", err);
if (handle != nullptr) {
#if USE_ESP32_VARIANT_ESP32C3 || USE_ESP32_VARIANT_ESP32C5 || USE_ESP32_VARIANT_ESP32C6 || \
USE_ESP32_VARIANT_ESP32C61 || USE_ESP32_VARIANT_ESP32H2 || USE_ESP32_VARIANT_ESP32P4 || USE_ESP32_VARIANT_ESP32S3
USE_ESP32_VARIANT_ESP32S3 || USE_ESP32_VARIANT_ESP32H2 || USE_ESP32_VARIANT_ESP32P4
adc_cali_delete_scheme_curve_fitting(handle);
#else
adc_cali_delete_scheme_line_fitting(handle);
@@ -280,21 +275,18 @@ float ADCSensor::sample_autorange_() {
err = adc_cali_raw_to_voltage(handle, raw, &voltage_mv);
if (err == ESP_OK) {
voltage = voltage_mv / 1000.0f;
ESP_LOGVV(TAG, "Autorange atten=%d: CALIBRATED - raw=%d -> %dmV -> %.6fV", atten, raw, voltage_mv, voltage);
} else {
voltage = raw * 3.3f / 4095.0f;
ESP_LOGVV(TAG, "Autorange atten=%d: UNCALIBRATED FALLBACK - raw=%d -> %.6fV (3.3V ref)", atten, raw, voltage);
}
// Clean up calibration handle
#if USE_ESP32_VARIANT_ESP32C3 || USE_ESP32_VARIANT_ESP32C5 || USE_ESP32_VARIANT_ESP32C6 || \
USE_ESP32_VARIANT_ESP32C61 || USE_ESP32_VARIANT_ESP32H2 || USE_ESP32_VARIANT_ESP32P4 || USE_ESP32_VARIANT_ESP32S3
USE_ESP32_VARIANT_ESP32S3 || USE_ESP32_VARIANT_ESP32H2 || USE_ESP32_VARIANT_ESP32P4
adc_cali_delete_scheme_curve_fitting(handle);
#else
adc_cali_delete_scheme_line_fitting(handle);
#endif
} else {
voltage = raw * 3.3f / 4095.0f;
ESP_LOGVV(TAG, "Autorange atten=%d: NO CALIBRATION - raw=%d -> %.6fV (3.3V ref)", atten, raw, voltage);
}
return {raw, voltage};
@@ -332,32 +324,18 @@ float ADCSensor::sample_autorange_() {
}
const int adc_half = 2048;
const uint32_t c12 = std::min(raw12, adc_half);
const int32_t c6_signed = adc_half - std::abs(raw6 - adc_half);
const uint32_t c6 = (c6_signed > 0) ? c6_signed : 0; // Clamp to prevent underflow
const int32_t c2_signed = adc_half - std::abs(raw2 - adc_half);
const uint32_t c2 = (c2_signed > 0) ? c2_signed : 0; // Clamp to prevent underflow
const uint32_t c0 = std::min(4095 - raw0, adc_half);
const uint32_t csum = c12 + c6 + c2 + c0;
ESP_LOGVV(TAG, "Autorange summary:");
ESP_LOGVV(TAG, " Raw readings: 12db=%d, 6db=%d, 2.5db=%d, 0db=%d", raw12, raw6, raw2, raw0);
ESP_LOGVV(TAG, " Voltages: 12db=%.6f, 6db=%.6f, 2.5db=%.6f, 0db=%.6f", mv12, mv6, mv2, mv0);
ESP_LOGVV(TAG, " Coefficients: c12=%u, c6=%u, c2=%u, c0=%u, sum=%u", c12, c6, c2, c0, csum);
uint32_t c12 = std::min(raw12, adc_half);
uint32_t c6 = adc_half - std::abs(raw6 - adc_half);
uint32_t c2 = adc_half - std::abs(raw2 - adc_half);
uint32_t c0 = std::min(4095 - raw0, adc_half);
uint32_t csum = c12 + c6 + c2 + c0;
if (csum == 0) {
ESP_LOGE(TAG, "Invalid weight sum in autorange calculation");
return NAN;
}
const float final_result = (mv12 * c12 + mv6 * c6 + mv2 * c2 + mv0 * c0) / csum;
ESP_LOGV(TAG, "Autorange final: (%.6f*%u + %.6f*%u + %.6f*%u + %.6f*%u)/%u = %.6fV", mv12, c12, mv6, c6, mv2, c2, mv0,
c0, csum, final_result);
return final_result;
return (mv12 * c12 + mv6 * c6 + mv2 * c2 + mv0 * c0) / csum;
}
} // namespace adc
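The coefficient logic in `sample_autorange_()` (visible on both sides of the hunk above) blends the four attenuation readings with triangular weights: mid-range readings dominate, while readings pinned near 0 or full scale contribute nothing. A compact restatement as a sketch, assuming 12-bit raw values:

```python
def blend_autorange(raw12, raw6, raw2, raw0, mv12, mv6, mv2, mv0):
    """Weighted blend of the four attenuation readings (12-bit raw values)."""
    half = 2048
    c12 = min(raw12, half)                # 12 dB: full weight up to mid-scale
    c6 = max(half - abs(raw6 - half), 0)  # 6 dB / 2.5 dB: triangular weight,
    c2 = max(half - abs(raw2 - half), 0)  # clamped so saturation can't underflow
    c0 = min(4095 - raw0, half)           # 0 dB: full weight near full scale
    csum = c12 + c6 + c2 + c0
    if csum == 0:
        return float("nan")  # every range saturated: no valid weight
    return (mv12 * c12 + mv6 * c6 + mv2 * c2 + mv0 * c0) / csum
```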

View File

@@ -37,7 +37,7 @@ void ADCSensor::dump_config() {
}
float ADCSensor::sample() {
auto aggr = Aggregator<uint32_t>(this->sampling_mode_);
auto aggr = Aggregator(this->sampling_mode_);
for (uint8_t sample = 0; sample < this->sample_count_; sample++) {
uint32_t raw = 0;

View File

@@ -30,7 +30,7 @@ void ADCSensor::dump_config() {
float ADCSensor::sample() {
uint32_t raw = 0;
auto aggr = Aggregator<uint32_t>(this->sampling_mode_);
auto aggr = Aggregator(this->sampling_mode_);
if (this->output_raw_) {
for (uint8_t sample = 0; sample < this->sample_count_; sample++) {

View File

@@ -41,7 +41,7 @@ void ADCSensor::dump_config() {
float ADCSensor::sample() {
uint32_t raw = 0;
auto aggr = Aggregator<uint32_t>(this->sampling_mode_);
auto aggr = Aggregator(this->sampling_mode_);
if (this->is_temperature_) {
adc_set_temp_sensor_enabled(true);

View File

@@ -1,207 +0,0 @@
#include "adc_sensor.h"
#ifdef USE_ZEPHYR
#include "esphome/core/log.h"
#include "hal/nrf_saadc.h"
namespace esphome {
namespace adc {
static const char *const TAG = "adc.zephyr";
void ADCSensor::setup() {
if (!adc_is_ready_dt(this->channel_)) {
ESP_LOGE(TAG, "ADC controller device %s not ready", this->channel_->dev->name);
return;
}
auto err = adc_channel_setup_dt(this->channel_);
if (err < 0) {
ESP_LOGE(TAG, "Could not setup channel %s (%d)", this->channel_->dev->name, err);
return;
}
}
#if ESPHOME_LOG_LEVEL >= ESPHOME_LOG_LEVEL_VERBOSE
static const LogString *gain_to_str(enum adc_gain gain) {
switch (gain) {
case ADC_GAIN_1_6:
return LOG_STR("1/6");
case ADC_GAIN_1_5:
return LOG_STR("1/5");
case ADC_GAIN_1_4:
return LOG_STR("1/4");
case ADC_GAIN_1_3:
return LOG_STR("1/3");
case ADC_GAIN_2_5:
return LOG_STR("2/5");
case ADC_GAIN_1_2:
return LOG_STR("1/2");
case ADC_GAIN_2_3:
return LOG_STR("2/3");
case ADC_GAIN_4_5:
return LOG_STR("4/5");
case ADC_GAIN_1:
return LOG_STR("1");
case ADC_GAIN_2:
return LOG_STR("2");
case ADC_GAIN_3:
return LOG_STR("3");
case ADC_GAIN_4:
return LOG_STR("4");
case ADC_GAIN_6:
return LOG_STR("6");
case ADC_GAIN_8:
return LOG_STR("8");
case ADC_GAIN_12:
return LOG_STR("12");
case ADC_GAIN_16:
return LOG_STR("16");
case ADC_GAIN_24:
return LOG_STR("24");
case ADC_GAIN_32:
return LOG_STR("32");
case ADC_GAIN_64:
return LOG_STR("64");
case ADC_GAIN_128:
return LOG_STR("128");
}
return LOG_STR("undefined gain");
}
static const LogString *reference_to_str(enum adc_reference reference) {
switch (reference) {
case ADC_REF_VDD_1:
return LOG_STR("VDD");
case ADC_REF_VDD_1_2:
return LOG_STR("VDD/2");
case ADC_REF_VDD_1_3:
return LOG_STR("VDD/3");
case ADC_REF_VDD_1_4:
return LOG_STR("VDD/4");
case ADC_REF_INTERNAL:
return LOG_STR("INTERNAL");
case ADC_REF_EXTERNAL0:
return LOG_STR("External, input 0");
case ADC_REF_EXTERNAL1:
return LOG_STR("External, input 1");
}
return LOG_STR("undefined reference");
}
static const LogString *input_to_str(uint8_t input) {
switch (input) {
case NRF_SAADC_INPUT_AIN0:
return LOG_STR("AIN0");
case NRF_SAADC_INPUT_AIN1:
return LOG_STR("AIN1");
case NRF_SAADC_INPUT_AIN2:
return LOG_STR("AIN2");
case NRF_SAADC_INPUT_AIN3:
return LOG_STR("AIN3");
case NRF_SAADC_INPUT_AIN4:
return LOG_STR("AIN4");
case NRF_SAADC_INPUT_AIN5:
return LOG_STR("AIN5");
case NRF_SAADC_INPUT_AIN6:
return LOG_STR("AIN6");
case NRF_SAADC_INPUT_AIN7:
return LOG_STR("AIN7");
case NRF_SAADC_INPUT_VDD:
return LOG_STR("VDD");
case NRF_SAADC_INPUT_VDDHDIV5:
return LOG_STR("VDDHDIV5");
}
return LOG_STR("undefined input");
}
#endif // ESPHOME_LOG_LEVEL >= ESPHOME_LOG_LEVEL_VERBOSE
void ADCSensor::dump_config() {
LOG_SENSOR("", "ADC Sensor", this);
LOG_PIN(" Pin: ", this->pin_);
#if ESPHOME_LOG_LEVEL >= ESPHOME_LOG_LEVEL_VERBOSE
ESP_LOGV(TAG,
" Name: %s\n"
" Channel: %d\n"
" vref_mv: %d\n"
" Resolution %d\n"
" Oversampling %d",
this->channel_->dev->name, this->channel_->channel_id, this->channel_->vref_mv, this->channel_->resolution,
this->channel_->oversampling);
ESP_LOGV(TAG,
" Gain: %s\n"
" reference: %s\n"
" acquisition_time: %d\n"
" differential %s",
LOG_STR_ARG(gain_to_str(this->channel_->channel_cfg.gain)),
LOG_STR_ARG(reference_to_str(this->channel_->channel_cfg.reference)),
this->channel_->channel_cfg.acquisition_time, YESNO(this->channel_->channel_cfg.differential));
if (this->channel_->channel_cfg.differential) {
ESP_LOGV(TAG,
" Positive: %s\n"
" Negative: %s",
LOG_STR_ARG(input_to_str(this->channel_->channel_cfg.input_positive)),
LOG_STR_ARG(input_to_str(this->channel_->channel_cfg.input_negative)));
} else {
ESP_LOGV(TAG, " Positive: %s", LOG_STR_ARG(input_to_str(this->channel_->channel_cfg.input_positive)));
}
#endif
LOG_UPDATE_INTERVAL(this);
}
float ADCSensor::sample() {
auto aggr = Aggregator<int32_t>(this->sampling_mode_);
int err;
for (uint8_t sample = 0; sample < this->sample_count_; sample++) {
int16_t buf = 0;
struct adc_sequence sequence = {
.buffer = &buf,
/* buffer size in bytes, not number of samples */
.buffer_size = sizeof(buf),
};
int32_t val_raw;
err = adc_sequence_init_dt(this->channel_, &sequence);
if (err < 0) {
ESP_LOGE(TAG, "Could sequence init %s (%d)", this->channel_->dev->name, err);
return 0.0;
}
err = adc_read(this->channel_->dev, &sequence);
if (err < 0) {
ESP_LOGE(TAG, "Could not read %s (%d)", this->channel_->dev->name, err);
return 0.0;
}
val_raw = (int32_t) buf;
if (!this->channel_->channel_cfg.differential) {
// https://github.com/adafruit/Adafruit_nRF52_Arduino/blob/0ed4d9ffc674ae407be7cacf5696a02f5e789861/cores/nRF5/wiring_analog_nRF52.c#L222
if (val_raw < 0) {
val_raw = 0;
}
}
aggr.add_sample(val_raw);
}
int32_t val_mv = aggr.aggregate();
if (this->output_raw_) {
return val_mv;
}
err = adc_raw_to_millivolts_dt(this->channel_, &val_mv);
/* conversion to mV may not be supported, skip if not */
if (err < 0) {
ESP_LOGE(TAG, "Value in mV not available %s (%d)", this->channel_->dev->name, err);
return 0.0;
}
return val_mv / 1000.0f;
}
} // namespace adc
} // namespace esphome
#endif

View File

@@ -3,13 +3,6 @@ import logging
import esphome.codegen as cg
from esphome.components import sensor, voltage_sampler
from esphome.components.esp32 import get_esp32_variant
from esphome.components.nrf52.const import AIN_TO_GPIO, EXTRA_ADC
from esphome.components.zephyr import (
zephyr_add_overlay,
zephyr_add_prj_conf,
zephyr_add_user,
)
from esphome.config_helpers import filter_source_files_from_platform
import esphome.config_validation as cv
from esphome.const import (
CONF_ATTENUATION,
@@ -18,10 +11,8 @@ from esphome.const import (
CONF_PIN,
CONF_RAW,
DEVICE_CLASS_VOLTAGE,
PLATFORM_NRF52,
STATE_CLASS_MEASUREMENT,
UNIT_VOLT,
PlatformFramework,
)
from esphome.core import CORE
@@ -69,10 +60,6 @@ ADCSensor = adc_ns.class_(
"ADCSensor", sensor.Sensor, cg.PollingComponent, voltage_sampler.VoltageSampler
)
CONF_NRF_SAADC = "nrf_saadc"
adc_dt_spec = cg.global_ns.class_("adc_dt_spec")
CONFIG_SCHEMA = cv.All(
sensor.sensor_schema(
ADCSensor,
@@ -88,7 +75,6 @@ CONFIG_SCHEMA = cv.All(
cv.SplitDefault(CONF_ATTENUATION, esp32="0db"): cv.All(
cv.only_on_esp32, _attenuation
),
cv.OnlyWith(CONF_NRF_SAADC, PLATFORM_NRF52): cv.declare_id(adc_dt_spec),
cv.Optional(CONF_SAMPLES, default=1): cv.int_range(min=1, max=255),
cv.Optional(CONF_SAMPLING_MODE, default="avg"): _sampling_mode,
}
@@ -97,8 +83,6 @@ CONFIG_SCHEMA = cv.All(
validate_config,
)
CONF_ADC_CHANNEL_ID = "adc_channel_id"
async def to_code(config):
var = cg.new_Pvariable(config[CONF_ID])
@@ -109,7 +93,7 @@ async def to_code(config):
cg.add_define("USE_ADC_SENSOR_VCC")
elif config[CONF_PIN] == "TEMPERATURE":
cg.add(var.set_is_temperature())
elif not CORE.is_nrf52 or config[CONF_PIN][CONF_NUMBER] not in EXTRA_ADC:
else:
pin = await cg.gpio_pin_expression(config[CONF_PIN])
cg.add(var.set_pin(pin))
@@ -138,59 +122,3 @@ async def to_code(config):
):
chan = ESP32_VARIANT_ADC2_PIN_TO_CHANNEL[variant][pin_num]
cg.add(var.set_channel(adc_unit_t.ADC_UNIT_2, chan))
elif CORE.is_nrf52:
CORE.data.setdefault(CONF_ADC_CHANNEL_ID, 0)
channel_id = CORE.data[CONF_ADC_CHANNEL_ID]
CORE.data[CONF_ADC_CHANNEL_ID] = channel_id + 1
zephyr_add_prj_conf("ADC", True)
nrf_saadc = config[CONF_NRF_SAADC]
rhs = cg.RawExpression(
f"ADC_DT_SPEC_GET_BY_IDX(DT_PATH(zephyr_user), {channel_id})"
)
adc = cg.new_Pvariable(nrf_saadc, rhs)
cg.add(var.set_adc_channel(adc))
gain = "ADC_GAIN_1_6"
pin_number = config[CONF_PIN][CONF_NUMBER]
if pin_number == "VDDHDIV5":
gain = "ADC_GAIN_1_2"
if isinstance(pin_number, int):
GPIO_TO_AIN = {v: k for k, v in AIN_TO_GPIO.items()}
pin_number = GPIO_TO_AIN[pin_number]
zephyr_add_user("io-channels", f"<&adc {channel_id}>")
zephyr_add_overlay(
f"""
&adc {{
#address-cells = <1>;
#size-cells = <0>;
channel@{channel_id} {{
reg = <{channel_id}>;
zephyr,gain = "{gain}";
zephyr,reference = "ADC_REF_INTERNAL";
zephyr,acquisition-time = <ADC_ACQ_TIME_DEFAULT>;
zephyr,input-positive = <NRF_SAADC_{pin_number}>;
zephyr,resolution = <14>;
zephyr,oversampling = <8>;
}};
}};
"""
)
FILTER_SOURCE_FILES = filter_source_files_from_platform(
{
"adc_sensor_esp32.cpp": {
PlatformFramework.ESP32_ARDUINO,
PlatformFramework.ESP32_IDF,
},
"adc_sensor_esp8266.cpp": {PlatformFramework.ESP8266_ARDUINO},
"adc_sensor_rp2040.cpp": {PlatformFramework.RP2040_ARDUINO},
"adc_sensor_libretiny.cpp": {
PlatformFramework.BK72XX_ARDUINO,
PlatformFramework.RTL87XX_ARDUINO,
PlatformFramework.LN882X_ARDUINO,
},
"adc_sensor_zephyr.cpp": {PlatformFramework.NRF52_ZEPHYR},
}
)

View File

@@ -25,13 +25,11 @@ class AddressableLightDisplay : public display::DisplayBuffer {
if (enabled_ && !enabled) { // enabled -> disabled
// - Tell the parent light to refresh, effectively wiping the display. Also
// restores the previous effect (if any).
if (this->last_effect_index_.has_value()) {
light_state_->make_call().set_effect(*this->last_effect_index_).perform();
}
light_state_->make_call().set_effect(this->last_effect_).perform();
} else if (!enabled_ && enabled) { // disabled -> enabled
// - Save the current effect index.
this->last_effect_index_ = light_state_->get_current_effect_index();
// - Save the current effect.
this->last_effect_ = light_state_->get_effect_name();
// - Disable any current effect.
light_state_->make_call().set_effect(0).perform();
}
@@ -58,7 +56,7 @@ class AddressableLightDisplay : public display::DisplayBuffer {
int32_t width_;
int32_t height_;
std::vector<Color> addressable_light_buffer_;
optional<uint32_t> last_effect_index_;
optional<std::string> last_effect_;
optional<std::function<int(int, int)>> pixel_mapper_f_;
};
} // namespace addressable_light

View File

@@ -113,7 +113,7 @@ void ADE7880::update() {
if (this->channel_a_ != nullptr) {
auto *chan = this->channel_a_;
this->update_sensor_from_s24zp_register16_(chan->current, AIRMS, [](float val) { return val / 100000.0f; });
this->update_sensor_from_s24zp_register16_(chan->voltage, AVRMS, [](float val) { return val / 10000.0f; });
this->update_sensor_from_s24zp_register16_(chan->voltage, BVRMS, [](float val) { return val / 10000.0f; });
this->update_sensor_from_s24zp_register16_(chan->active_power, AWATT, [](float val) { return val / 100.0f; });
this->update_sensor_from_s24zp_register16_(chan->apparent_power, AVA, [](float val) { return val / 100.0f; });
this->update_sensor_from_s16_register16_(chan->power_factor, APF,
@@ -162,13 +162,11 @@ void ADE7880::update() {
}
void ADE7880::dump_config() {
ESP_LOGCONFIG(TAG,
"ADE7880:\n"
" Frequency: %.0f Hz",
this->frequency_);
ESP_LOGCONFIG(TAG, "ADE7880:");
LOG_PIN(" IRQ0 Pin: ", this->irq0_pin_);
LOG_PIN(" IRQ1 Pin: ", this->irq1_pin_);
LOG_PIN(" RESET Pin: ", this->reset_pin_);
ESP_LOGCONFIG(TAG, " Frequency: %.0f Hz", this->frequency_);
if (this->channel_a_ != nullptr) {
ESP_LOGCONFIG(TAG, " Phase A:");

View File

@@ -36,7 +36,6 @@ from esphome.const import (
UNIT_WATT,
UNIT_WATT_HOURS,
)
from esphome.types import ConfigType
DEPENDENCIES = ["i2c"]
@@ -52,20 +51,6 @@ CONF_POWER_GAIN = "power_gain"
CONF_NEUTRAL = "neutral"
# Tuple of power channel phases
POWER_PHASES = (CONF_PHASE_A, CONF_PHASE_B, CONF_PHASE_C)
# Tuple of sensor types that can be configured for power channels
POWER_SENSOR_TYPES = (
CONF_CURRENT,
CONF_VOLTAGE,
CONF_ACTIVE_POWER,
CONF_APPARENT_POWER,
CONF_POWER_FACTOR,
CONF_FORWARD_ACTIVE_ENERGY,
CONF_REVERSE_ACTIVE_ENERGY,
)
NEUTRAL_CHANNEL_SCHEMA = cv.Schema(
{
cv.GenerateID(): cv.declare_id(NeutralChannel),
@@ -165,69 +150,12 @@ POWER_CHANNEL_SCHEMA = cv.Schema(
}
)
def prefix_sensor_name(
sensor_conf: ConfigType,
channel_name: str,
channel_config: ConfigType,
sensor_type: str,
) -> None:
"""Helper to prefix sensor name with channel name.
Args:
sensor_conf: The sensor configuration (dict or string)
channel_name: The channel name to prefix with
channel_config: The channel configuration to update
sensor_type: The sensor type key in the channel config
"""
if isinstance(sensor_conf, dict) and CONF_NAME in sensor_conf:
sensor_name = sensor_conf[CONF_NAME]
if sensor_name and not sensor_name.startswith(channel_name):
sensor_conf[CONF_NAME] = f"{channel_name} {sensor_name}"
elif isinstance(sensor_conf, str):
# Simple value case - convert to dict with prefixed name
channel_config[sensor_type] = {CONF_NAME: f"{channel_name} {sensor_conf}"}
def process_channel_sensors(
config: ConfigType, channel_key: str, sensor_types: tuple
) -> None:
"""Process sensors for a channel and prefix their names.
Args:
config: The main configuration
channel_key: The channel key (e.g., CONF_PHASE_A, CONF_NEUTRAL)
sensor_types: Tuple of sensor types to process for this channel
"""
if not (channel_config := config.get(channel_key)) or not (
channel_name := channel_config.get(CONF_NAME)
):
return
for sensor_type in sensor_types:
if sensor_conf := channel_config.get(sensor_type):
prefix_sensor_name(sensor_conf, channel_name, channel_config, sensor_type)
def preprocess_channels(config: ConfigType) -> ConfigType:
"""Preprocess channel configurations to add channel name prefix to sensor names."""
# Process power channels
for channel in POWER_PHASES:
process_channel_sensors(config, channel, POWER_SENSOR_TYPES)
# Process neutral channel
process_channel_sensors(config, CONF_NEUTRAL, (CONF_CURRENT,))
return config
CONFIG_SCHEMA = cv.All(
preprocess_channels,
CONFIG_SCHEMA = (
cv.Schema(
{
cv.GenerateID(): cv.declare_id(ADE7880),
cv.Optional(CONF_FREQUENCY, default="50Hz"): cv.All(
cv.frequency, cv.float_range(min=45.0, max=66.0)
cv.frequency, cv.Range(min=45.0, max=66.0)
),
cv.Optional(CONF_IRQ0_PIN): pins.internal_gpio_input_pin_schema,
cv.Required(CONF_IRQ1_PIN): pins.internal_gpio_input_pin_schema,
@@ -239,7 +167,7 @@ CONFIG_SCHEMA = cv.All(
}
)
.extend(cv.polling_component_schema("60s"))
.extend(i2c.i2c_device_schema(0x38)),
.extend(i2c.i2c_device_schema(0x38))
)
@@ -260,7 +188,15 @@ async def neutral_channel(config):
async def power_channel(config):
var = cg.new_Pvariable(config[CONF_ID])
for sensor_type in POWER_SENSOR_TYPES:
for sensor_type in [
CONF_CURRENT,
CONF_VOLTAGE,
CONF_ACTIVE_POWER,
CONF_APPARENT_POWER,
CONF_POWER_FACTOR,
CONF_FORWARD_ACTIVE_ENERGY,
CONF_REVERSE_ACTIVE_ENERGY,
]:
if conf := config.get(sensor_type):
sens = await sensor.new_sensor(conf)
cg.add(getattr(var, f"set_{sensor_type}")(sens))
@@ -280,6 +216,44 @@ async def power_channel(config):
return var
def final_validate(config):
for channel in [CONF_PHASE_A, CONF_PHASE_B, CONF_PHASE_C]:
if channel := config.get(channel):
channel_name = channel.get(CONF_NAME)
for sensor_type in [
CONF_CURRENT,
CONF_VOLTAGE,
CONF_ACTIVE_POWER,
CONF_APPARENT_POWER,
CONF_POWER_FACTOR,
CONF_FORWARD_ACTIVE_ENERGY,
CONF_REVERSE_ACTIVE_ENERGY,
]:
if conf := channel.get(sensor_type):
sensor_name = conf.get(CONF_NAME)
if (
sensor_name
and channel_name
and not sensor_name.startswith(channel_name)
):
conf[CONF_NAME] = f"{channel_name} {sensor_name}"
if channel := config.get(CONF_NEUTRAL):
channel_name = channel.get(CONF_NAME)
if conf := channel.get(CONF_CURRENT):
sensor_name = conf.get(CONF_NAME)
if (
sensor_name
and channel_name
and not sensor_name.startswith(channel_name)
):
conf[CONF_NAME] = f"{channel_name} {sensor_name}"
FINAL_VALIDATE_SCHEMA = final_validate
async def to_code(config):
var = cg.new_Pvariable(config[CONF_ID])
await cg.register_component(var, config)
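
Editor's note: the `prefix_sensor_name` / `preprocess_channels` helpers removed above (and the `final_validate` version added below) both implement the same channel-name prefixing. A minimal standalone sketch of that behavior, with illustrative dict keys rather than the real validation pipeline:

```python
# Standalone sketch of the channel-name prefixing in the hunk above.
CONF_NAME = "name"

def prefix_sensor_name(sensor_conf, channel_name, channel_config, sensor_type):
    """Prefix a sensor's name with its channel name, mirroring the helper above."""
    if isinstance(sensor_conf, dict) and CONF_NAME in sensor_conf:
        sensor_name = sensor_conf[CONF_NAME]
        if sensor_name and not sensor_name.startswith(channel_name):
            sensor_conf[CONF_NAME] = f"{channel_name} {sensor_name}"
    elif isinstance(sensor_conf, str):
        # Shorthand form: a bare string becomes a dict with the prefixed name
        channel_config[sensor_type] = {CONF_NAME: f"{channel_name} {sensor_conf}"}

channel = {CONF_NAME: "Phase A", "current": "Current", "voltage": {CONF_NAME: "Voltage"}}
for key in ("current", "voltage"):
    prefix_sensor_name(channel[key], channel[CONF_NAME], channel, key)
print(channel["current"])  # {'name': 'Phase A Current'}
print(channel["voltage"])  # {'name': 'Phase A Voltage'}
```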

View File

@@ -24,8 +24,6 @@ from esphome.const import (
UNIT_WATT,
)
CODEOWNERS = ["@angelnu"]
CONF_CURRENT_A = "current_a"
CONF_CURRENT_B = "current_b"
CONF_ACTIVE_POWER_A = "active_power_a"

View File

@@ -25,8 +25,7 @@ void ADE7953::setup() {
this->ade_write_8(PGA_V_8, pga_v_);
this->ade_write_8(PGA_IA_8, pga_ia_);
this->ade_write_8(PGA_IB_8, pga_ib_);
this->ade_write_32(AVGAIN_32, avgain_);
this->ade_write_32(BVGAIN_32, bvgain_);
this->ade_write_32(AVGAIN_32, vgain_);
this->ade_write_32(AIGAIN_32, aigain_);
this->ade_write_32(BIGAIN_32, bigain_);
this->ade_write_32(AWGAIN_32, awgain_);
@@ -35,8 +34,7 @@ void ADE7953::setup() {
this->ade_read_8(PGA_V_8, &pga_v_);
this->ade_read_8(PGA_IA_8, &pga_ia_);
this->ade_read_8(PGA_IB_8, &pga_ib_);
this->ade_read_32(AVGAIN_32, &avgain_);
this->ade_read_32(BVGAIN_32, &bvgain_);
this->ade_read_32(AVGAIN_32, &vgain_);
this->ade_read_32(AIGAIN_32, &aigain_);
this->ade_read_32(BIGAIN_32, &bigain_);
this->ade_read_32(AWGAIN_32, &awgain_);
@@ -65,14 +63,13 @@ void ADE7953::dump_config() {
" PGA_V_8: 0x%X\n"
" PGA_IA_8: 0x%X\n"
" PGA_IB_8: 0x%X\n"
" AVGAIN_32: 0x%08jX\n"
" BVGAIN_32: 0x%08jX\n"
" VGAIN_32: 0x%08jX\n"
" AIGAIN_32: 0x%08jX\n"
" BIGAIN_32: 0x%08jX\n"
" AWGAIN_32: 0x%08jX\n"
" BWGAIN_32: 0x%08jX",
this->use_acc_energy_regs_, pga_v_, pga_ia_, pga_ib_, (uintmax_t) avgain_, (uintmax_t) bvgain_,
(uintmax_t) aigain_, (uintmax_t) bigain_, (uintmax_t) awgain_, (uintmax_t) bwgain_);
this->use_acc_energy_regs_, pga_v_, pga_ia_, pga_ib_, (uintmax_t) vgain_, (uintmax_t) aigain_,
(uintmax_t) bigain_, (uintmax_t) awgain_, (uintmax_t) bwgain_);
}
#define ADE_PUBLISH_(name, val, factor) \

View File

@@ -46,12 +46,7 @@ class ADE7953 : public PollingComponent, public sensor::Sensor {
void set_pga_ib(uint8_t pga_ib) { pga_ib_ = pga_ib; }
// Set input gains
void set_vgain(uint32_t vgain) {
// Datasheet says: "to avoid discrepancies in other registers,
// if AVGAIN is set then BVGAIN should be set to the same value."
avgain_ = vgain;
bvgain_ = vgain;
}
void set_vgain(uint32_t vgain) { vgain_ = vgain; }
void set_aigain(uint32_t aigain) { aigain_ = aigain; }
void set_bigain(uint32_t bigain) { bigain_ = bigain; }
void set_awgain(uint32_t awgain) { awgain_ = awgain; }
@@ -105,8 +100,7 @@ class ADE7953 : public PollingComponent, public sensor::Sensor {
uint8_t pga_v_;
uint8_t pga_ia_;
uint8_t pga_ib_;
uint32_t avgain_;
uint32_t bvgain_;
uint32_t vgain_;
uint32_t aigain_;
uint32_t bigain_;
uint32_t awgain_;

View File

@@ -21,12 +21,10 @@ void ADS1115Sensor::update() {
void ADS1115Sensor::dump_config() {
LOG_SENSOR(" ", "ADS1115 Sensor", this);
ESP_LOGCONFIG(TAG,
" Multiplexer: %u\n"
" Gain: %u\n"
" Resolution: %u\n"
" Sample rate: %u",
this->multiplexer_, this->gain_, this->resolution_, this->samplerate_);
ESP_LOGCONFIG(TAG, " Multiplexer: %u", this->multiplexer_);
ESP_LOGCONFIG(TAG, " Gain: %u", this->gain_);
ESP_LOGCONFIG(TAG, " Resolution: %u", this->resolution_);
ESP_LOGCONFIG(TAG, " Sample rate: %u", this->samplerate_);
}
} // namespace ads1115

View File

@@ -9,10 +9,8 @@ static const char *const TAG = "ads1118.sensor";
void ADS1118Sensor::dump_config() {
LOG_SENSOR(" ", "ADS1118 Sensor", this);
ESP_LOGCONFIG(TAG,
" Multiplexer: %u\n"
" Gain: %u",
this->multiplexer_, this->gain_);
ESP_LOGCONFIG(TAG, " Multiplexer: %u", this->multiplexer_);
ESP_LOGCONFIG(TAG, " Gain: %u", this->gain_);
}
float ADS1118Sensor::sample() {

View File

@@ -89,7 +89,7 @@ void AGS10Component::dump_config() {
bool AGS10Component::new_i2c_address(uint8_t newaddress) {
uint8_t rev_newaddress = ~newaddress;
std::array<uint8_t, 5> data{newaddress, rev_newaddress, newaddress, rev_newaddress, 0};
data[4] = crc8(data.data(), 4, 0xFF, 0x31, true);
data[4] = calc_crc8_(data, 4);
if (!this->write_bytes(REG_ADDRESS, data)) {
this->error_code_ = COMMUNICATION_FAILED;
this->status_set_warning();
@@ -109,7 +109,7 @@ bool AGS10Component::set_zero_point_with_current_resistance() { return this->set
bool AGS10Component::set_zero_point_with(uint16_t value) {
std::array<uint8_t, 5> data{0x00, 0x0C, (uint8_t) ((value >> 8) & 0xFF), (uint8_t) (value & 0xFF), 0};
data[4] = crc8(data.data(), 4, 0xFF, 0x31, true);
data[4] = calc_crc8_(data, 4);
if (!this->write_bytes(REG_CALIBRATION, data)) {
this->error_code_ = COMMUNICATION_FAILED;
this->status_set_warning();
@@ -184,7 +184,7 @@ template<size_t N> optional<std::array<uint8_t, N>> AGS10Component::read_and_che
auto res = *data;
auto crc_byte = res[len];
if (crc_byte != crc8(res.data(), len, 0xFF, 0x31, true)) {
if (crc_byte != calc_crc8_(res, len)) {
this->error_code_ = CRC_CHECK_FAILED;
ESP_LOGE(TAG, "Reading AGS10 version failed: crc error!");
return optional<std::array<uint8_t, N>>();
@@ -192,5 +192,20 @@ template<size_t N> optional<std::array<uint8_t, N>> AGS10Component::read_and_che
return data;
}
template<size_t N> uint8_t AGS10Component::calc_crc8_(std::array<uint8_t, N> dat, uint8_t num) {
uint8_t i, byte1, crc = 0xFF;
for (byte1 = 0; byte1 < num; byte1++) {
crc ^= (dat[byte1]);
for (i = 0; i < 8; i++) {
if (crc & 0x80) {
crc = (crc << 1) ^ 0x31;
} else {
crc = (crc << 1);
}
}
}
return crc;
}
} // namespace ags10
} // namespace esphome
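
Editor's note: both `calc_crc8_` above and the shared `crc8()` helper it replaces compute CRC-8 with initial value 0xFF and polynomial 0x31, MSB-first. A Python sketch of the same algorithm, applied to the 5-byte address-change frame from `new_i2c_address()` above (the new address value is hypothetical):

```python
def crc8(data: bytes, init: int = 0xFF, poly: int = 0x31) -> int:
    """CRC-8 as in calc_crc8_ above: init 0xFF, polynomial 0x31 (x^8+x^5+x^4+1), MSB-first."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

# Address-change frame from new_i2c_address(): address, ~address, repeated,
# followed by the CRC over those four bytes.
new_address = 0x1A  # hypothetical new I2C address
frame = bytes([new_address, ~new_address & 0xFF, new_address, ~new_address & 0xFF])
frame += bytes([crc8(frame)])
print(frame.hex())
```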

View File

@@ -1,9 +1,9 @@
#pragma once
#include "esphome/components/i2c/i2c.h"
#include "esphome/components/sensor/sensor.h"
#include "esphome/core/automation.h"
#include "esphome/core/component.h"
#include "esphome/components/sensor/sensor.h"
#include "esphome/components/i2c/i2c.h"
namespace esphome {
namespace ags10 {
@@ -99,13 +99,23 @@ class AGS10Component : public PollingComponent, public i2c::I2CDevice {
* Read, checks and returns data from the sensor.
*/
template<size_t N> optional<std::array<uint8_t, N>> read_and_check_(uint8_t a_register);
/**
* Calculates CRC8 value.
*
* CRC8 calculation, initial value: 0xFF, polynomial: 0x31 (x8+ x5+ x4+1)
*
* @param[in] dat the data buffer
* @param num number of bytes in the buffer
*/
template<size_t N> uint8_t calc_crc8_(std::array<uint8_t, N> dat, uint8_t num);
};
template<typename... Ts> class AGS10NewI2cAddressAction : public Action<Ts...>, public Parented<AGS10Component> {
public:
TEMPLATABLE_VALUE(uint8_t, new_address)
void play(const Ts &...x) override { this->parent_->new_i2c_address(this->new_address_.value(x...)); }
void play(Ts... x) override { this->parent_->new_i2c_address(this->new_address_.value(x...)); }
};
enum AGS10SetZeroPointActionMode {
@@ -122,7 +132,7 @@ template<typename... Ts> class AGS10SetZeroPointAction : public Action<Ts...>, p
TEMPLATABLE_VALUE(uint16_t, value)
TEMPLATABLE_VALUE(AGS10SetZeroPointActionMode, mode)
void play(const Ts &...x) override {
void play(Ts... x) override {
switch (this->mode_.value(x...)) {
case FACTORY_DEFAULT:
this->parent_->set_zero_point_with_factory_defaults();

View File

@@ -83,7 +83,7 @@ void AHT10Component::setup() {
void AHT10Component::restart_read_() {
if (this->read_count_ == AHT10_ATTEMPTS) {
this->read_count_ = 0;
this->status_set_error(LOG_STR("Reading timed out"));
this->status_set_error("Reading timed out");
return;
}
this->read_count_++;
@@ -96,7 +96,7 @@ void AHT10Component::read_data_() {
ESP_LOGD(TAG, "Read attempt %d at %ums", this->read_count_, (unsigned) (millis() - this->start_time_));
}
if (this->read(data, 6) != i2c::ERROR_OK) {
this->status_set_warning(LOG_STR("Read failed, will retry"));
this->status_set_warning("Read failed, will retry");
this->restart_read_();
return;
}
@@ -113,7 +113,7 @@ void AHT10Component::read_data_() {
} else {
ESP_LOGD(TAG, "Invalid humidity, retrying");
if (this->write(AHT10_MEASURE_CMD, sizeof(AHT10_MEASURE_CMD)) != i2c::ERROR_OK) {
this->status_set_warning(LOG_STR(ESP_LOG_MSG_COMM_FAIL));
this->status_set_warning(ESP_LOG_MSG_COMM_FAIL);
}
this->restart_read_();
return;
@@ -144,7 +144,7 @@ void AHT10Component::update() {
return;
this->start_time_ = millis();
if (this->write(AHT10_MEASURE_CMD, sizeof(AHT10_MEASURE_CMD)) != i2c::ERROR_OK) {
this->status_set_warning(LOG_STR(ESP_LOG_MSG_COMM_FAIL));
this->status_set_warning(ESP_LOG_MSG_COMM_FAIL);
return;
}
this->restart_read_();

View File

@@ -13,7 +13,7 @@ template<typename... Ts> class SetAutoMuteAction : public Action<Ts...> {
TEMPLATABLE_VALUE(uint8_t, auto_mute_mode)
void play(const Ts &...x) override { this->aic3204_->set_auto_mute_mode(this->auto_mute_mode_.value(x...)); }
void play(Ts... x) override { this->aic3204_->set_auto_mute_mode(this->auto_mute_mode_.value(x...)); }
protected:
AIC3204 *aic3204_;

View File

@@ -18,6 +18,6 @@ CONFIG_SCHEMA = cv.Schema(
).extend(esp32_ble_tracker.ESP_BLE_DEVICE_SCHEMA)
async def to_code(config):
def to_code(config):
var = cg.new_Pvariable(config[CONF_ID])
await esp32_ble_tracker.register_ble_device(var, config)
yield esp32_ble_tracker.register_ble_device(var, config)

View File

@@ -20,8 +20,7 @@ bool AirthingsListener::parse_device(const esp32_ble_tracker::ESPBTDevice &devic
sn |= ((uint32_t) it.data[2] << 16);
sn |= ((uint32_t) it.data[3] << 24);
char addr_buf[MAC_ADDRESS_PRETTY_BUFFER_SIZE];
ESP_LOGD(TAG, "Found AirThings device Serial:%" PRIu32 " (MAC: %s)", sn, device.address_str_to(addr_buf));
ESP_LOGD(TAG, "Found AirThings device Serial:%" PRIu32 " (MAC: %s)", sn, device.address_str().c_str());
return true;
}
}

View File

@@ -1,5 +1,4 @@
#include "airthings_wave_base.h"
#include "esphome/components/esp32_ble/ble_uuid.h"
// All information related to reading battery information came from the sensors.airthings_wave
// project by Sverre Hamre (https://github.com/sverrham/sensor.airthings_wave)
@@ -94,10 +93,8 @@ void AirthingsWaveBase::update() {
bool AirthingsWaveBase::request_read_values_() {
auto *chr = this->parent()->get_characteristic(this->service_uuid_, this->sensors_data_characteristic_uuid_);
if (chr == nullptr) {
char service_buf[esp32_ble::UUID_STR_LEN];
char char_buf[esp32_ble::UUID_STR_LEN];
ESP_LOGW(TAG, "No sensor characteristic found at service %s char %s", this->service_uuid_.to_str(service_buf),
this->sensors_data_characteristic_uuid_.to_str(char_buf));
ESP_LOGW(TAG, "No sensor characteristic found at service %s char %s", this->service_uuid_.to_string().c_str(),
this->sensors_data_characteristic_uuid_.to_string().c_str());
return false;
}
@@ -120,20 +117,17 @@ bool AirthingsWaveBase::request_battery_() {
auto *chr = this->parent()->get_characteristic(this->service_uuid_, this->access_control_point_characteristic_uuid_);
if (chr == nullptr) {
char service_buf[esp32_ble::UUID_STR_LEN];
char char_buf[esp32_ble::UUID_STR_LEN];
ESP_LOGW(TAG, "No access control point characteristic found at service %s char %s",
this->service_uuid_.to_str(service_buf), this->access_control_point_characteristic_uuid_.to_str(char_buf));
this->service_uuid_.to_string().c_str(),
this->access_control_point_characteristic_uuid_.to_string().c_str());
return false;
}
auto *descr = this->parent()->get_descriptor(this->service_uuid_, this->access_control_point_characteristic_uuid_,
CLIENT_CHARACTERISTIC_CONFIGURATION_DESCRIPTOR_UUID);
if (descr == nullptr) {
char service_buf[esp32_ble::UUID_STR_LEN];
char char_buf[esp32_ble::UUID_STR_LEN];
ESP_LOGW(TAG, "No CCC descriptor found at service %s char %s", this->service_uuid_.to_str(service_buf),
this->access_control_point_characteristic_uuid_.to_str(char_buf));
ESP_LOGW(TAG, "No CCC descriptor found at service %s char %s", this->service_uuid_.to_string().c_str(),
this->access_control_point_characteristic_uuid_.to_string().c_str());
return false;
}

View File

@@ -13,7 +13,7 @@ from esphome.const import (
CONF_TRIGGER_ID,
CONF_WEB_SERVER,
)
from esphome.core import CORE, CoroPriority, coroutine_with_priority
from esphome.core import CORE, coroutine_with_priority
from esphome.core.entity_helpers import entity_duplicate_validator, setup_entity
from esphome.cpp_generator import MockObjClass
@@ -172,6 +172,12 @@ def alarm_control_panel_schema(
return _ALARM_CONTROL_PANEL_SCHEMA.extend(schema)
# Remove before 2025.11.0
ALARM_CONTROL_PANEL_SCHEMA = alarm_control_panel_schema(AlarmControlPanel)
ALARM_CONTROL_PANEL_SCHEMA.add_extra(
cv.deprecated_schema_constant("alarm_control_panel")
)
ALARM_CONTROL_PANEL_ACTION_SCHEMA = maybe_simple_id(
{
cv.GenerateID(): cv.use_id(AlarmControlPanel),
@@ -295,7 +301,8 @@ async def alarm_action_disarm_to_code(config, action_id, template_arg, args):
)
async def alarm_action_pending_to_code(config, action_id, template_arg, args):
paren = await cg.get_variable(config[CONF_ID])
return cg.new_Pvariable(action_id, template_arg, paren)
var = cg.new_Pvariable(action_id, template_arg, paren)
return var
@automation.register_action(
@@ -303,7 +310,8 @@ async def alarm_action_pending_to_code(config, action_id, template_arg, args):
)
async def alarm_action_trigger_to_code(config, action_id, template_arg, args):
paren = await cg.get_variable(config[CONF_ID])
return cg.new_Pvariable(action_id, template_arg, paren)
var = cg.new_Pvariable(action_id, template_arg, paren)
return var
@automation.register_action(
@@ -311,7 +319,8 @@ async def alarm_action_trigger_to_code(config, action_id, template_arg, args):
)
async def alarm_action_chime_to_code(config, action_id, template_arg, args):
paren = await cg.get_variable(config[CONF_ID])
return cg.new_Pvariable(action_id, template_arg, paren)
var = cg.new_Pvariable(action_id, template_arg, paren)
return var
@automation.register_action(
@@ -324,7 +333,8 @@ async def alarm_action_chime_to_code(config, action_id, template_arg, args):
)
async def alarm_action_ready_to_code(config, action_id, template_arg, args):
paren = await cg.get_variable(config[CONF_ID])
return cg.new_Pvariable(action_id, template_arg, paren)
var = cg.new_Pvariable(action_id, template_arg, paren)
return var
@automation.register_condition(
@@ -339,6 +349,7 @@ async def alarm_control_panel_is_armed_to_code(
return cg.new_Pvariable(condition_id, template_arg, paren)
@coroutine_with_priority(CoroPriority.CORE)
@coroutine_with_priority(100.0)
async def to_code(config):
cg.add_global(alarm_control_panel_ns.using)
cg.add_define("USE_ALARM_CONTROL_PANEL")

View File

@@ -1,14 +1,13 @@
#include "alarm_control_panel.h"
#include "esphome/core/defines.h"
#include "esphome/core/controller_registry.h"
#include <utility>
#include "alarm_control_panel.h"
#include "esphome/core/application.h"
#include "esphome/core/helpers.h"
#include "esphome/core/log.h"
namespace esphome::alarm_control_panel {
namespace esphome {
namespace alarm_control_panel {
static const char *const TAG = "alarm_control_panel";
@@ -31,16 +30,26 @@ void AlarmControlPanel::publish_state(AlarmControlPanelState state) {
this->last_update_ = millis();
if (state != this->current_state_) {
auto prev_state = this->current_state_;
ESP_LOGD(TAG, "'%s' >> %s (was %s)", this->get_name().c_str(),
LOG_STR_ARG(alarm_control_panel_state_to_string(state)),
ESP_LOGD(TAG, "Set state to: %s, previous: %s", LOG_STR_ARG(alarm_control_panel_state_to_string(state)),
LOG_STR_ARG(alarm_control_panel_state_to_string(prev_state)));
this->current_state_ = state;
// Single state callback - triggers check get_state() for specific states
this->state_callback_.call();
#if defined(USE_ALARM_CONTROL_PANEL) && defined(USE_CONTROLLER_REGISTRY)
ControllerRegistry::notify_alarm_control_panel_update(this);
#endif
// Cleared fires when leaving TRIGGERED state
if (state == ACP_STATE_TRIGGERED) {
this->triggered_callback_.call();
} else if (state == ACP_STATE_ARMING) {
this->arming_callback_.call();
} else if (state == ACP_STATE_PENDING) {
this->pending_callback_.call();
} else if (state == ACP_STATE_ARMED_HOME) {
this->armed_home_callback_.call();
} else if (state == ACP_STATE_ARMED_NIGHT) {
this->armed_night_callback_.call();
} else if (state == ACP_STATE_ARMED_AWAY) {
this->armed_away_callback_.call();
} else if (state == ACP_STATE_DISARMED) {
this->disarmed_callback_.call();
}
if (prev_state == ACP_STATE_TRIGGERED) {
this->cleared_callback_.call();
}
@@ -55,6 +64,34 @@ void AlarmControlPanel::add_on_state_callback(std::function<void()> &&callback)
this->state_callback_.add(std::move(callback));
}
void AlarmControlPanel::add_on_triggered_callback(std::function<void()> &&callback) {
this->triggered_callback_.add(std::move(callback));
}
void AlarmControlPanel::add_on_arming_callback(std::function<void()> &&callback) {
this->arming_callback_.add(std::move(callback));
}
void AlarmControlPanel::add_on_armed_home_callback(std::function<void()> &&callback) {
this->armed_home_callback_.add(std::move(callback));
}
void AlarmControlPanel::add_on_armed_night_callback(std::function<void()> &&callback) {
this->armed_night_callback_.add(std::move(callback));
}
void AlarmControlPanel::add_on_armed_away_callback(std::function<void()> &&callback) {
this->armed_away_callback_.add(std::move(callback));
}
void AlarmControlPanel::add_on_pending_callback(std::function<void()> &&callback) {
this->pending_callback_.add(std::move(callback));
}
void AlarmControlPanel::add_on_disarmed_callback(std::function<void()> &&callback) {
this->disarmed_callback_.add(std::move(callback));
}
void AlarmControlPanel::add_on_cleared_callback(std::function<void()> &&callback) {
this->cleared_callback_.add(std::move(callback));
}
@@ -115,4 +152,5 @@ void AlarmControlPanel::disarm(optional<std::string> code) {
call.perform();
}
} // namespace esphome::alarm_control_panel
} // namespace alarm_control_panel
} // namespace esphome
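
Editor's note: the `publish_state()` change above replaces per-state callback managers with a single state callback, while keeping the rule that "cleared" fires on leaving TRIGGERED. A small sketch of the original dispatch model, in simplified form (class and state names here are illustrative):

```python
# Sketch of the state-change dispatch above: per-state callbacks plus a
# "cleared" callback that fires when the previous state was TRIGGERED.
TRIGGERED = "triggered"

class Panel:
    def __init__(self):
        self.state = "disarmed"
        self.listeners = {}   # state name -> list of callbacks
        self.cleared = []     # callbacks for leaving TRIGGERED

    def on(self, state, cb):
        self.listeners.setdefault(state, []).append(cb)

    def publish_state(self, state):
        prev, self.state = self.state, state
        if state != prev:
            for cb in self.listeners.get(state, []):
                cb()
            if prev == TRIGGERED:  # leaving TRIGGERED fires "cleared"
                for cb in self.cleared:
                    cb()

p = Panel()
p.on("arming", lambda: print("arming!"))
p.cleared.append(lambda: print("cleared!"))
p.publish_state("arming")
p.publish_state(TRIGGERED)
p.publish_state("disarmed")  # prints "cleared!"
```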

View File

@@ -1,5 +1,7 @@
#pragma once
#include <map>
#include "alarm_control_panel_call.h"
#include "alarm_control_panel_state.h"
@@ -7,7 +9,8 @@
#include "esphome/core/entity_base.h"
#include "esphome/core/log.h"
namespace esphome::alarm_control_panel {
namespace esphome {
namespace alarm_control_panel {
enum AlarmControlPanelFeature : uint8_t {
// Matches Home Assistant values
@@ -32,13 +35,54 @@ class AlarmControlPanel : public EntityBase {
*/
void publish_state(AlarmControlPanelState state);
/** Add a callback for when the state of the alarm_control_panel changes.
* Triggers can check get_state() to determine the new state.
/** Add a callback for when the state of the alarm_control_panel changes
*
* @param callback The callback function
*/
void add_on_state_callback(std::function<void()> &&callback);
/** Add a callback for when the state of the alarm_control_panel changes to triggered
*
* @param callback The callback function
*/
void add_on_triggered_callback(std::function<void()> &&callback);
/** Add a callback for when the state of the alarm_control_panel changes to arming
*
* @param callback The callback function
*/
void add_on_arming_callback(std::function<void()> &&callback);
/** Add a callback for when the state of the alarm_control_panel changes to pending
*
* @param callback The callback function
*/
void add_on_pending_callback(std::function<void()> &&callback);
/** Add a callback for when the state of the alarm_control_panel changes to armed_home
*
* @param callback The callback function
*/
void add_on_armed_home_callback(std::function<void()> &&callback);
/** Add a callback for when the state of the alarm_control_panel changes to armed_night
*
* @param callback The callback function
*/
void add_on_armed_night_callback(std::function<void()> &&callback);
/** Add a callback for when the state of the alarm_control_panel changes to armed_away
*
* @param callback The callback function
*/
void add_on_armed_away_callback(std::function<void()> &&callback);
/** Add a callback for when the state of the alarm_control_panel changes to disarmed
*
* @param callback The callback function
*/
void add_on_disarmed_callback(std::function<void()> &&callback);
/** Add a callback for when the state of the alarm_control_panel clears from triggered
*
* @param callback The callback function
@@ -128,14 +172,29 @@ class AlarmControlPanel : public EntityBase {
uint32_t last_update_;
// the call control function
virtual void control(const AlarmControlPanelCall &call) = 0;
// state callback - triggers check get_state() for specific state
LazyCallbackManager<void()> state_callback_{};
// clear callback - fires when leaving TRIGGERED state
LazyCallbackManager<void()> cleared_callback_{};
// state callback
CallbackManager<void()> state_callback_{};
// trigger callback
CallbackManager<void()> triggered_callback_{};
// arming callback
CallbackManager<void()> arming_callback_{};
// pending callback
CallbackManager<void()> pending_callback_{};
// armed_home callback
CallbackManager<void()> armed_home_callback_{};
// armed_night callback
CallbackManager<void()> armed_night_callback_{};
// armed_away callback
CallbackManager<void()> armed_away_callback_{};
// disarmed callback
CallbackManager<void()> disarmed_callback_{};
// clear callback
CallbackManager<void()> cleared_callback_{};
// chime callback
LazyCallbackManager<void()> chime_callback_{};
CallbackManager<void()> chime_callback_{};
// ready callback
LazyCallbackManager<void()> ready_callback_{};
CallbackManager<void()> ready_callback_{};
};
} // namespace esphome::alarm_control_panel
} // namespace alarm_control_panel
} // namespace esphome

View File

@@ -4,7 +4,8 @@
#include "esphome/core/log.h"
namespace esphome::alarm_control_panel {
namespace esphome {
namespace alarm_control_panel {
static const char *const TAG = "alarm_control_panel";
@@ -98,4 +99,5 @@ void AlarmControlPanelCall::perform() {
}
}
} // namespace esphome::alarm_control_panel
} // namespace alarm_control_panel
} // namespace esphome

View File

@@ -6,7 +6,8 @@
#include "esphome/core/helpers.h"
namespace esphome::alarm_control_panel {
namespace esphome {
namespace alarm_control_panel {
class AlarmControlPanel;
@@ -35,4 +36,5 @@ class AlarmControlPanelCall {
void validate_();
};
} // namespace esphome::alarm_control_panel
} // namespace alarm_control_panel
} // namespace esphome

View File

@@ -1,6 +1,7 @@
#include "alarm_control_panel_state.h"
namespace esphome::alarm_control_panel {
namespace esphome {
namespace alarm_control_panel {
const LogString *alarm_control_panel_state_to_string(AlarmControlPanelState state) {
switch (state) {
@@ -29,4 +30,5 @@ const LogString *alarm_control_panel_state_to_string(AlarmControlPanelState stat
}
}
} // namespace esphome::alarm_control_panel
} // namespace alarm_control_panel
} // namespace esphome

View File

@@ -3,7 +3,8 @@
#include <cstdint>
#include "esphome/core/log.h"
namespace esphome::alarm_control_panel {
namespace esphome {
namespace alarm_control_panel {
enum AlarmControlPanelState : uint8_t {
ACP_STATE_DISARMED = 0,
@@ -24,4 +25,5 @@ enum AlarmControlPanelState : uint8_t {
*/
const LogString *alarm_control_panel_state_to_string(AlarmControlPanelState state);
} // namespace esphome::alarm_control_panel
} // namespace alarm_control_panel
} // namespace esphome

View File

@@ -3,9 +3,9 @@
#include "esphome/core/automation.h"
#include "alarm_control_panel.h"
namespace esphome::alarm_control_panel {
namespace esphome {
namespace alarm_control_panel {
/// Trigger on any state change
class StateTrigger : public Trigger<> {
public:
explicit StateTrigger(AlarmControlPanel *alarm_control_panel) {
@@ -13,30 +13,55 @@ class StateTrigger : public Trigger<> {
}
};
/// Template trigger that fires when entering a specific state
template<AlarmControlPanelState State> class StateEnterTrigger : public Trigger<> {
class TriggeredTrigger : public Trigger<> {
public:
explicit StateEnterTrigger(AlarmControlPanel *alarm_control_panel) : alarm_control_panel_(alarm_control_panel) {
alarm_control_panel->add_on_state_callback([this]() {
if (this->alarm_control_panel_->get_state() == State)
this->trigger();
});
explicit TriggeredTrigger(AlarmControlPanel *alarm_control_panel) {
alarm_control_panel->add_on_triggered_callback([this]() { this->trigger(); });
}
protected:
AlarmControlPanel *alarm_control_panel_;
};
// Type aliases for state-specific triggers
using TriggeredTrigger = StateEnterTrigger<ACP_STATE_TRIGGERED>;
using ArmingTrigger = StateEnterTrigger<ACP_STATE_ARMING>;
using PendingTrigger = StateEnterTrigger<ACP_STATE_PENDING>;
using ArmedHomeTrigger = StateEnterTrigger<ACP_STATE_ARMED_HOME>;
using ArmedNightTrigger = StateEnterTrigger<ACP_STATE_ARMED_NIGHT>;
using ArmedAwayTrigger = StateEnterTrigger<ACP_STATE_ARMED_AWAY>;
using DisarmedTrigger = StateEnterTrigger<ACP_STATE_DISARMED>;
class ArmingTrigger : public Trigger<> {
public:
explicit ArmingTrigger(AlarmControlPanel *alarm_control_panel) {
alarm_control_panel->add_on_arming_callback([this]() { this->trigger(); });
}
};
class PendingTrigger : public Trigger<> {
public:
explicit PendingTrigger(AlarmControlPanel *alarm_control_panel) {
alarm_control_panel->add_on_pending_callback([this]() { this->trigger(); });
}
};
class ArmedHomeTrigger : public Trigger<> {
public:
explicit ArmedHomeTrigger(AlarmControlPanel *alarm_control_panel) {
alarm_control_panel->add_on_armed_home_callback([this]() { this->trigger(); });
}
};
class ArmedNightTrigger : public Trigger<> {
public:
explicit ArmedNightTrigger(AlarmControlPanel *alarm_control_panel) {
alarm_control_panel->add_on_armed_night_callback([this]() { this->trigger(); });
}
};
class ArmedAwayTrigger : public Trigger<> {
public:
explicit ArmedAwayTrigger(AlarmControlPanel *alarm_control_panel) {
alarm_control_panel->add_on_armed_away_callback([this]() { this->trigger(); });
}
};
class DisarmedTrigger : public Trigger<> {
public:
explicit DisarmedTrigger(AlarmControlPanel *alarm_control_panel) {
alarm_control_panel->add_on_disarmed_callback([this]() { this->trigger(); });
}
};
/// Trigger when leaving TRIGGERED state (alarm cleared)
class ClearedTrigger : public Trigger<> {
public:
explicit ClearedTrigger(AlarmControlPanel *alarm_control_panel) {
@@ -44,7 +69,6 @@ class ClearedTrigger : public Trigger<> {
}
};
/// Trigger on chime event (zone opened while disarmed)
class ChimeTrigger : public Trigger<> {
public:
explicit ChimeTrigger(AlarmControlPanel *alarm_control_panel) {
@@ -52,7 +76,6 @@ class ChimeTrigger : public Trigger<> {
}
};
/// Trigger on ready state change
class ReadyTrigger : public Trigger<> {
public:
explicit ReadyTrigger(AlarmControlPanel *alarm_control_panel) {
@@ -66,7 +89,7 @@ template<typename... Ts> class ArmAwayAction : public Action<Ts...> {
TEMPLATABLE_VALUE(std::string, code)
void play(const Ts &...x) override {
void play(Ts... x) override {
auto call = this->alarm_control_panel_->make_call();
auto code = this->code_.optional_value(x...);
if (code.has_value()) {
@@ -86,7 +109,7 @@ template<typename... Ts> class ArmHomeAction : public Action<Ts...> {
TEMPLATABLE_VALUE(std::string, code)
void play(const Ts &...x) override {
void play(Ts... x) override {
auto call = this->alarm_control_panel_->make_call();
auto code = this->code_.optional_value(x...);
if (code.has_value()) {
@@ -106,7 +129,7 @@ template<typename... Ts> class ArmNightAction : public Action<Ts...> {
TEMPLATABLE_VALUE(std::string, code)
void play(const Ts &...x) override {
void play(Ts... x) override {
auto call = this->alarm_control_panel_->make_call();
auto code = this->code_.optional_value(x...);
if (code.has_value()) {
@@ -126,7 +149,7 @@ template<typename... Ts> class DisarmAction : public Action<Ts...> {
TEMPLATABLE_VALUE(std::string, code)
void play(const Ts &...x) override { this->alarm_control_panel_->disarm(this->code_.optional_value(x...)); }
void play(Ts... x) override { this->alarm_control_panel_->disarm(this->code_.optional_value(x...)); }
protected:
AlarmControlPanel *alarm_control_panel_;
@@ -136,7 +159,7 @@ template<typename... Ts> class PendingAction : public Action<Ts...> {
public:
explicit PendingAction(AlarmControlPanel *alarm_control_panel) : alarm_control_panel_(alarm_control_panel) {}
void play(const Ts &...x) override { this->alarm_control_panel_->make_call().pending().perform(); }
void play(Ts... x) override { this->alarm_control_panel_->make_call().pending().perform(); }
protected:
AlarmControlPanel *alarm_control_panel_;
@@ -146,7 +169,7 @@ template<typename... Ts> class TriggeredAction : public Action<Ts...> {
public:
explicit TriggeredAction(AlarmControlPanel *alarm_control_panel) : alarm_control_panel_(alarm_control_panel) {}
void play(const Ts &...x) override { this->alarm_control_panel_->make_call().triggered().perform(); }
void play(Ts... x) override { this->alarm_control_panel_->make_call().triggered().perform(); }
protected:
AlarmControlPanel *alarm_control_panel_;
@@ -155,7 +178,7 @@ template<typename... Ts> class TriggeredAction : public Action<Ts...> {
template<typename... Ts> class AlarmControlPanelCondition : public Condition<Ts...> {
public:
AlarmControlPanelCondition(AlarmControlPanel *parent) : parent_(parent) {}
bool check(const Ts &...x) override {
bool check(Ts... x) override {
return this->parent_->is_state_armed(this->parent_->get_state()) ||
this->parent_->get_state() == ACP_STATE_PENDING || this->parent_->get_state() == ACP_STATE_TRIGGERED;
}
@@ -164,4 +187,5 @@ template<typename... Ts> class AlarmControlPanelCondition : public Condition<Ts.
AlarmControlPanel *parent_;
};
} // namespace esphome::alarm_control_panel
} // namespace alarm_control_panel
} // namespace esphome

View File

@@ -56,13 +56,13 @@ bool Alpha3::is_current_response_type_(const uint8_t *response_type) {
void Alpha3::handle_geni_response_(const uint8_t *response, uint16_t length) {
if (this->response_offset_ >= this->response_length_) {
ESP_LOGD(TAG, "[%s] GENI response begin", this->parent_->address_str());
ESP_LOGD(TAG, "[%s] GENI response begin", this->parent_->address_str().c_str());
if (length < GENI_RESPONSE_HEADER_LENGTH) {
ESP_LOGW(TAG, "[%s] response too short", this->parent_->address_str());
ESP_LOGW(TAG, "[%s] response to short", this->parent_->address_str().c_str());
return;
}
if (response[0] != 36 || response[2] != 248 || response[3] != 231 || response[4] != 10) {
ESP_LOGW(TAG, "[%s] response bytes %d %d %d %d %d don't match GENI HEADER", this->parent_->address_str(),
ESP_LOGW(TAG, "[%s] response bytes %d %d %d %d %d don't match GENI HEADER", this->parent_->address_str().c_str(),
response[0], response[1], response[2], response[3], response[4]);
return;
}
@@ -77,11 +77,11 @@ void Alpha3::handle_geni_response_(const uint8_t *response, uint16_t length) {
};
if (this->is_current_response_type_(GENI_RESPONSE_TYPE_FLOW_HEAD)) {
ESP_LOGD(TAG, "[%s] FLOW HEAD Response", this->parent_->address_str());
ESP_LOGD(TAG, "[%s] FLOW HEAD Response", this->parent_->address_str().c_str());
extract_publish_sensor_value(GENI_RESPONSE_FLOW_OFFSET, this->flow_sensor_, 3600.0F);
extract_publish_sensor_value(GENI_RESPONSE_HEAD_OFFSET, this->head_sensor_, .0001F);
} else if (this->is_current_response_type_(GENI_RESPONSE_TYPE_POWER)) {
ESP_LOGD(TAG, "[%s] POWER Response", this->parent_->address_str());
ESP_LOGD(TAG, "[%s] POWER Response", this->parent_->address_str().c_str());
extract_publish_sensor_value(GENI_RESPONSE_POWER_OFFSET, this->power_sensor_, 1.0F);
extract_publish_sensor_value(GENI_RESPONSE_CURRENT_OFFSET, this->current_sensor_, 1.0F);
extract_publish_sensor_value(GENI_RESPONSE_MOTOR_SPEED_OFFSET, this->speed_sensor_, 1.0F);
@@ -100,7 +100,7 @@ void Alpha3::gattc_event_handler(esp_gattc_cb_event_t event, esp_gatt_if_t gattc
if (param->open.status == ESP_GATT_OK) {
this->response_offset_ = 0;
this->response_length_ = 0;
ESP_LOGI(TAG, "[%s] connection open", this->parent_->address_str());
ESP_LOGI(TAG, "[%s] connection open", this->parent_->address_str().c_str());
}
break;
}
@@ -132,7 +132,7 @@ void Alpha3::gattc_event_handler(esp_gattc_cb_event_t event, esp_gatt_if_t gattc
case ESP_GATTC_SEARCH_CMPL_EVT: {
auto *chr = this->parent_->get_characteristic(ALPHA3_GENI_SERVICE_UUID, ALPHA3_GENI_CHARACTERISTIC_UUID);
if (chr == nullptr) {
ESP_LOGE(TAG, "[%s] No GENI service found at device, not an Alpha3..?", this->parent_->address_str());
ESP_LOGE(TAG, "[%s] No GENI service found at device, not an Alpha3..?", this->parent_->address_str().c_str());
break;
}
auto status = esp_ble_gattc_register_for_notify(this->parent_->get_gattc_if(), this->parent_->get_remote_bda(),
@@ -164,12 +164,12 @@ void Alpha3::send_request_(uint8_t *request, size_t len) {
esp_ble_gattc_write_char(this->parent_->get_gattc_if(), this->parent_->get_conn_id(), this->geni_handle_, len,
request, ESP_GATT_WRITE_TYPE_NO_RSP, ESP_GATT_AUTH_REQ_NONE);
if (status)
ESP_LOGW(TAG, "[%s] esp_ble_gattc_write_char failed, status=%d", this->parent_->address_str(), status);
ESP_LOGW(TAG, "[%s] esp_ble_gattc_write_char failed, status=%d", this->parent_->address_str().c_str(), status);
}
void Alpha3::update() {
if (this->node_state != espbt::ClientState::ESTABLISHED) {
ESP_LOGW(TAG, "[%s] Cannot poll, not connected", this->parent_->address_str());
ESP_LOGW(TAG, "[%s] Cannot poll, not connected", this->parent_->address_str().c_str());
return;
}

View File

@@ -15,8 +15,10 @@ namespace alpha3 {
namespace espbt = esphome::esp32_ble_tracker;
static const espbt::ESPBTUUID ALPHA3_GENI_SERVICE_UUID = espbt::ESPBTUUID::from_uint16(0xfe5d);
static const espbt::ESPBTUUID ALPHA3_GENI_CHARACTERISTIC_UUID = espbt::ESPBTUUID::from_raw(
{0xa9, 0x7b, 0xb8, 0x85, 0x00, 0x1a, 0x28, 0xaa, 0x2a, 0x43, 0x6e, 0x03, 0xd1, 0xff, 0x9c, 0x85});
static const espbt::ESPBTUUID ALPHA3_GENI_CHARACTERISTIC_UUID =
espbt::ESPBTUUID::from_raw({static_cast<char>(0xa9), 0x7b, static_cast<char>(0xb8), static_cast<char>(0x85), 0x0,
0x1a, 0x28, static_cast<char>(0xaa), 0x2a, 0x43, 0x6e, 0x3, static_cast<char>(0xd1),
static_cast<char>(0xff), static_cast<char>(0x9c), static_cast<char>(0x85)});
static const int16_t GENI_RESPONSE_HEADER_LENGTH = 13;
static const size_t GENI_RESPONSE_TYPE_LENGTH = 8;

View File

@@ -29,6 +29,22 @@ namespace am2315c {
static const char *const TAG = "am2315c";
uint8_t AM2315C::crc8_(uint8_t *data, uint8_t len) {
uint8_t crc = 0xFF;
while (len--) {
crc ^= *data++;
for (uint8_t i = 0; i < 8; i++) {
if (crc & 0x80) {
crc <<= 1;
crc ^= 0x31;
} else {
crc <<= 1;
}
}
}
return crc;
}
bool AM2315C::reset_register_(uint8_t reg) {
// code based on demo code sent by www.aosong.com
// no further documentation.
@@ -70,7 +86,7 @@ bool AM2315C::convert_(uint8_t *data, float &humidity, float &temperature) {
humidity = raw * 9.5367431640625e-5;
raw = ((data[3] & 0x0F) << 16) | (data[4] << 8) | data[5];
temperature = raw * 1.9073486328125e-4 - 50;
return crc8(data, 6, 0xFF, 0x31, true) == data[6];
return this->crc8_(data, 6) == data[6];
}
void AM2315C::setup() {
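
Editor's note: the scaling constants in `convert_()` above are 100 / 2^20 and 200 / 2^20, since both readings are 20-bit raw values. A worked example under that reading; the temperature bit extraction is taken from the hunk above, while the humidity packing shown here is the usual AHT2x-style layout and is an assumption, as that line is not part of the hunk:

```python
# Worked example of the AM2315C scaling above.
RAW_MAX = 2 ** 20

def convert(data: bytes) -> tuple[float, float]:
    """Scale a 6-byte payload the way convert_() above does."""
    # Assumed humidity packing (AHT2x-style); temperature packing is from the hunk.
    raw_h = (data[1] << 12) | (data[2] << 4) | (data[3] >> 4)
    raw_t = ((data[3] & 0x0F) << 16) | (data[4] << 8) | data[5]
    humidity = raw_h * 100 / RAW_MAX           # == raw_h * 9.5367431640625e-5
    temperature = raw_t * 200 / RAW_MAX - 50   # == raw_t * 1.9073486328125e-4 - 50
    return humidity, temperature

# Mid-scale raw values (2**19) land at 50 %RH and 50 degC:
print(convert(bytes([0x1C, 0x80, 0x00, 0x08, 0x00, 0x00])))
```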

View File

@@ -21,9 +21,9 @@
// SOFTWARE.
#pragma once
#include "esphome/components/i2c/i2c.h"
#include "esphome/components/sensor/sensor.h"
#include "esphome/core/component.h"
#include "esphome/components/sensor/sensor.h"
#include "esphome/components/i2c/i2c.h"
namespace esphome {
namespace am2315c {
@@ -39,6 +39,7 @@ class AM2315C : public PollingComponent, public i2c::I2CDevice {
void set_humidity_sensor(sensor::Sensor *humidity_sensor) { this->humidity_sensor_ = humidity_sensor; }
protected:
uint8_t crc8_(uint8_t *data, uint8_t len);
bool convert_(uint8_t *data, float &humidity, float &temperature);
bool reset_register_(uint8_t reg);

View File

@@ -44,9 +44,11 @@ void Am43::gattc_event_handler(esp_gattc_cb_event_t event, esp_gatt_if_t gattc_i
auto *chr = this->parent_->get_characteristic(AM43_SERVICE_UUID, AM43_CHARACTERISTIC_UUID);
if (chr == nullptr) {
if (this->parent_->get_characteristic(AM43_TUYA_SERVICE_UUID, AM43_TUYA_CHARACTERISTIC_UUID) != nullptr) {
ESP_LOGE(TAG, "[%s] Detected a Tuya AM43 which is not supported, sorry.", this->parent_->address_str());
ESP_LOGE(TAG, "[%s] Detected a Tuya AM43 which is not supported, sorry.",
this->parent_->address_str().c_str());
} else {
ESP_LOGE(TAG, "[%s] No control service found at device, not an AM43..?", this->parent_->address_str());
ESP_LOGE(TAG, "[%s] No control service found at device, not an AM43..?",
this->parent_->address_str().c_str());
}
break;
}
@@ -80,7 +82,8 @@ void Am43::gattc_event_handler(esp_gattc_cb_event_t event, esp_gatt_if_t gattc_i
this->char_handle_, packet->length, packet->data,
ESP_GATT_WRITE_TYPE_NO_RSP, ESP_GATT_AUTH_REQ_NONE);
if (status) {
ESP_LOGW(TAG, "[%s] esp_ble_gattc_write_char failed, status=%d", this->parent_->address_str(), status);
ESP_LOGW(TAG, "[%s] esp_ble_gattc_write_char failed, status=%d", this->parent_->address_str().c_str(),
status);
}
}
this->current_sensor_ = 0;
@@ -94,7 +97,7 @@ void Am43::gattc_event_handler(esp_gattc_cb_event_t event, esp_gatt_if_t gattc_i
void Am43::update() {
if (this->node_state != espbt::ClientState::ESTABLISHED) {
ESP_LOGW(TAG, "[%s] Cannot poll, not connected", this->parent_->address_str());
ESP_LOGW(TAG, "[%s] Cannot poll, not connected", this->parent_->address_str().c_str());
return;
}
if (this->current_sensor_ == 0) {
@@ -104,7 +107,7 @@ void Am43::update() {
esp_ble_gattc_write_char(this->parent_->get_gattc_if(), this->parent_->get_conn_id(), this->char_handle_,
packet->length, packet->data, ESP_GATT_WRITE_TYPE_NO_RSP, ESP_GATT_AUTH_REQ_NONE);
if (status) {
ESP_LOGW(TAG, "[%s] esp_ble_gattc_write_char failed, status=%d", this->parent_->address_str(), status);
ESP_LOGW(TAG, "[%s] esp_ble_gattc_write_char failed, status=%d", this->parent_->address_str().c_str(), status);
}
}
this->current_sensor_++;

View File

@@ -12,11 +12,10 @@ void AnalogThresholdBinarySensor::setup() {
// TRUE state is defined to be when sensor is >= threshold
// so when undefined sensor value initialize to FALSE
if (std::isnan(sensor_value)) {
this->raw_state_ = false;
this->publish_initial_state(false);
} else {
this->raw_state_ = sensor_value >= (this->lower_threshold_.value() + this->upper_threshold_.value()) / 2.0f;
this->publish_initial_state(this->raw_state_);
this->publish_initial_state(sensor_value >=
(this->lower_threshold_.value() + this->upper_threshold_.value()) / 2.0f);
}
}
@@ -26,10 +25,8 @@ void AnalogThresholdBinarySensor::set_sensor(sensor::Sensor *analog_sensor) {
this->sensor_->add_on_state_callback([this](float sensor_value) {
// if there is an invalid sensor reading, ignore the change and keep the current state
if (!std::isnan(sensor_value)) {
// Use raw_state_ for hysteresis logic, not this->state which is post-filter
this->raw_state_ =
sensor_value >= (this->raw_state_ ? this->lower_threshold_.value() : this->upper_threshold_.value());
this->publish_state(this->raw_state_);
this->publish_state(sensor_value >=
(this->state ? this->lower_threshold_.value() : this->upper_threshold_.value()));
}
});
}
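
Editor's note: the callback above implements hysteresis: while ON, the value is compared against the lower threshold; while OFF, against the upper one, so values between the thresholds keep the current state. A minimal sketch of that rule:

```python
# Minimal sketch of the hysteresis rule used in the callback above.
import math

class ThresholdSensor:
    def __init__(self, lower: float, upper: float):
        self.lower, self.upper = lower, upper
        self.state = False

    def update(self, value: float) -> bool:
        if not math.isnan(value):  # invalid readings keep the current state
            self.state = value >= (self.lower if self.state else self.upper)
        return self.state

s = ThresholdSensor(lower=40.0, upper=60.0)
print([s.update(v) for v in (50, 65, 50, 39, 50)])
# [False, True, True, False, False] -- in-between values keep the state
```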

View File

@@ -20,7 +20,6 @@ class AnalogThresholdBinarySensor : public Component, public binary_sensor::Bina
sensor::Sensor *sensor_{nullptr};
TemplatableValue<float> upper_threshold_{};
TemplatableValue<float> lower_threshold_{};
bool raw_state_{false}; // Pre-filter state for hysteresis logic
};
} // namespace analog_threshold

View File

@@ -34,20 +34,17 @@ SetFrameAction = animation_ns.class_(
"AnimationSetFrameAction", automation.Action, cg.Parented.template(Animation_)
)
CONFIG_SCHEMA = cv.All(
espImage.IMAGE_SCHEMA.extend(
{
cv.Required(CONF_ID): cv.declare_id(Animation_),
cv.Optional(CONF_LOOP): cv.All(
{
cv.Optional(CONF_START_FRAME, default=0): cv.positive_int,
cv.Optional(CONF_END_FRAME): cv.positive_int,
cv.Optional(CONF_REPEAT): cv.positive_int,
}
),
},
),
espImage.validate_settings,
CONFIG_SCHEMA = espImage.IMAGE_SCHEMA.extend(
{
cv.Required(CONF_ID): cv.declare_id(Animation_),
cv.Optional(CONF_LOOP): cv.All(
{
cv.Optional(CONF_START_FRAME, default=0): cv.positive_int,
cv.Optional(CONF_END_FRAME): cv.positive_int,
cv.Optional(CONF_REPEAT): cv.positive_int,
}
),
},
)

View File

@@ -26,12 +26,12 @@ uint32_t Animation::get_animation_frame_count() const { return this->animation_f
int Animation::get_current_frame() const { return this->current_frame_; }
void Animation::next_frame() {
this->current_frame_++;
if (loop_count_ && static_cast<uint32_t>(this->current_frame_) == loop_end_frame_ &&
if (loop_count_ && this->current_frame_ == loop_end_frame_ &&
(this->loop_current_iteration_ < loop_count_ || loop_count_ < 0)) {
this->current_frame_ = loop_start_frame_;
this->loop_current_iteration_++;
}
if (static_cast<uint32_t>(this->current_frame_) >= animation_frame_count_) {
if (this->current_frame_ >= animation_frame_count_) {
this->loop_current_iteration_ = 1;
this->current_frame_ = 0;
}
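
Editor's note: `next_frame()` above advances the frame, snaps back from the loop-end frame to the loop-start frame until the configured iteration count is used up (a negative count loops forever), and wraps to frame 0 at the end of the animation. A simulation of that sequence, assuming the same member semantics:

```python
# Simulation of next_frame() above.
def frames(frame_count, loop_start, loop_end, loop_count, steps):
    current, iteration, out = 0, 1, []
    for _ in range(steps):
        current += 1
        # Reaching the loop-end frame jumps back while iterations remain
        if loop_count and current == loop_end and (iteration < loop_count or loop_count < 0):
            current = loop_start
            iteration += 1
        # Past the last frame: restart the animation and reset the loop counter
        if current >= frame_count:
            iteration = 1
            current = 0
        out.append(current)
    return out

# 5-frame animation that plays frames 1..2 three times before moving on:
print(frames(frame_count=5, loop_start=1, loop_end=3, loop_count=3, steps=12))
# [1, 2, 1, 2, 1, 2, 3, 4, 0, 1, 2, 1]
```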

View File

@@ -39,7 +39,7 @@ class Animation : public image::Image {
template<typename... Ts> class AnimationNextFrameAction : public Action<Ts...> {
public:
AnimationNextFrameAction(Animation *parent) : parent_(parent) {}
void play(const Ts &...x) override { this->parent_->next_frame(); }
void play(Ts... x) override { this->parent_->next_frame(); }
protected:
Animation *parent_;
@@ -48,7 +48,7 @@ template<typename... Ts> class AnimationNextFrameAction : public Action<Ts...> {
template<typename... Ts> class AnimationPrevFrameAction : public Action<Ts...> {
public:
AnimationPrevFrameAction(Animation *parent) : parent_(parent) {}
void play(const Ts &...x) override { this->parent_->prev_frame(); }
void play(Ts... x) override { this->parent_->prev_frame(); }
protected:
Animation *parent_;
@@ -58,7 +58,7 @@ template<typename... Ts> class AnimationSetFrameAction : public Action<Ts...> {
public:
AnimationSetFrameAction(Animation *parent) : parent_(parent) {}
TEMPLATABLE_VALUE(uint16_t, frame)
void play(const Ts &...x) override { this->parent_->set_frame(this->frame_.value(x...)); }
void play(Ts... x) override { this->parent_->set_frame(this->frame_.value(x...)); }
protected:
Animation *parent_;

View File

@@ -42,7 +42,7 @@ void Anova::control(const ClimateCall &call) {
esp_ble_gattc_write_char(this->parent_->get_gattc_if(), this->parent_->get_conn_id(), this->char_handle_,
pkt->length, pkt->data, ESP_GATT_WRITE_TYPE_NO_RSP, ESP_GATT_AUTH_REQ_NONE);
if (status) {
ESP_LOGW(TAG, "[%s] esp_ble_gattc_write_char failed, status=%d", this->parent_->address_str(), status);
ESP_LOGW(TAG, "[%s] esp_ble_gattc_write_char failed, status=%d", this->parent_->address_str().c_str(), status);
}
}
if (call.get_target_temperature().has_value()) {
@@ -51,7 +51,7 @@ void Anova::control(const ClimateCall &call) {
esp_ble_gattc_write_char(this->parent_->get_gattc_if(), this->parent_->get_conn_id(), this->char_handle_,
pkt->length, pkt->data, ESP_GATT_WRITE_TYPE_NO_RSP, ESP_GATT_AUTH_REQ_NONE);
if (status) {
ESP_LOGW(TAG, "[%s] esp_ble_gattc_write_char failed, status=%d", this->parent_->address_str(), status);
ESP_LOGW(TAG, "[%s] esp_ble_gattc_write_char failed, status=%d", this->parent_->address_str().c_str(), status);
}
}
}
@@ -67,10 +67,8 @@ void Anova::gattc_event_handler(esp_gattc_cb_event_t event, esp_gatt_if_t gattc_
case ESP_GATTC_SEARCH_CMPL_EVT: {
auto *chr = this->parent_->get_characteristic(ANOVA_SERVICE_UUID, ANOVA_CHARACTERISTIC_UUID);
if (chr == nullptr) {
ESP_LOGW(TAG,
"[%s] No control service found at device, not an Anova..?\n"
"[%s] Note, this component does not currently support Anova Nano.",
this->get_name().c_str(), this->get_name().c_str());
ESP_LOGW(TAG, "[%s] No control service found at device, not an Anova..?", this->get_name().c_str());
ESP_LOGW(TAG, "[%s] Note, this component does not currently support Anova Nano.", this->get_name().c_str());
break;
}
this->char_handle_ = chr->handle;
@@ -126,7 +124,8 @@ void Anova::gattc_event_handler(esp_gattc_cb_event_t event, esp_gatt_if_t gattc_
esp_ble_gattc_write_char(this->parent_->get_gattc_if(), this->parent_->get_conn_id(), this->char_handle_,
pkt->length, pkt->data, ESP_GATT_WRITE_TYPE_NO_RSP, ESP_GATT_AUTH_REQ_NONE);
if (status) {
ESP_LOGW(TAG, "[%s] esp_ble_gattc_write_char failed, status=%d", this->parent_->address_str(), status);
ESP_LOGW(TAG, "[%s] esp_ble_gattc_write_char failed, status=%d", this->parent_->address_str().c_str(),
status);
}
}
}
@@ -151,7 +150,7 @@ void Anova::update() {
esp_ble_gattc_write_char(this->parent_->get_gattc_if(), this->parent_->get_conn_id(), this->char_handle_,
pkt->length, pkt->data, ESP_GATT_WRITE_TYPE_NO_RSP, ESP_GATT_AUTH_REQ_NONE);
if (status) {
ESP_LOGW(TAG, "[%s] esp_ble_gattc_write_char failed, status=%d", this->parent_->address_str(), status);
ESP_LOGW(TAG, "[%s] esp_ble_gattc_write_char failed, status=%d", this->parent_->address_str().c_str(), status);
}
this->current_request_++;
}

View File

@@ -28,7 +28,7 @@ class Anova : public climate::Climate, public esphome::ble_client::BLEClientNode
void dump_config() override;
climate::ClimateTraits traits() override {
auto traits = climate::ClimateTraits();
traits.add_feature_flags(climate::CLIMATE_SUPPORTS_CURRENT_TEMPERATURE);
traits.set_supports_current_temperature(true);
traits.set_supported_modes({climate::CLIMATE_MODE_OFF, climate::ClimateMode::CLIMATE_MODE_HEAT});
traits.set_visual_min_temperature(25.0);
traits.set_visual_max_temperature(100.0);

View File

@@ -1,97 +1,60 @@
import base64
import logging
from esphome import automation
from esphome.automation import Condition
import esphome.codegen as cg
from esphome.components.logger import request_log_listener
from esphome.config_helpers import get_logger_level
import esphome.config_validation as cv
from esphome.const import (
CONF_ACTION,
CONF_ACTIONS,
CONF_CAPTURE_RESPONSE,
CONF_DATA,
CONF_DATA_TEMPLATE,
CONF_EVENT,
CONF_ID,
CONF_KEY,
CONF_MAX_CONNECTIONS,
CONF_ON_CLIENT_CONNECTED,
CONF_ON_CLIENT_DISCONNECTED,
CONF_ON_ERROR,
CONF_ON_SUCCESS,
CONF_PASSWORD,
CONF_PORT,
CONF_REBOOT_TIMEOUT,
CONF_RESPONSE_TEMPLATE,
CONF_SERVICE,
CONF_SERVICES,
CONF_TAG,
CONF_THEN,
CONF_TRIGGER_ID,
CONF_VARIABLES,
)
from esphome.core import CORE, ID, CoroPriority, EsphomeError, coroutine_with_priority
from esphome.cpp_generator import MockObj, TemplateArgsType
from esphome.types import ConfigFragmentType, ConfigType
_LOGGER = logging.getLogger(__name__)
from esphome.core import CORE, coroutine_with_priority
DOMAIN = "api"
DEPENDENCIES = ["network"]
CODEOWNERS = ["@esphome/core"]
def AUTO_LOAD(config: ConfigType) -> list[str]:
"""Conditionally auto-load json only when capture_response is used."""
base = ["socket"]
# Check if any homeassistant.action/homeassistant.service has capture_response: true
# This flag is set during config validation in _validate_response_config
if not config or CORE.data.get(DOMAIN, {}).get(CONF_CAPTURE_RESPONSE, False):
return base + ["json"]
return base
AUTO_LOAD = ["socket"]
CODEOWNERS = ["@OttoWinter"]
api_ns = cg.esphome_ns.namespace("api")
APIServer = api_ns.class_("APIServer", cg.Component, cg.Controller)
HomeAssistantServiceCallAction = api_ns.class_(
"HomeAssistantServiceCallAction", automation.Action
)
ActionResponse = api_ns.class_("ActionResponse")
HomeAssistantActionResponseTrigger = api_ns.class_(
"HomeAssistantActionResponseTrigger", automation.Trigger
)
APIConnectedCondition = api_ns.class_("APIConnectedCondition", Condition)
APIRespondAction = api_ns.class_("APIRespondAction", automation.Action)
APIUnregisterServiceCallAction = api_ns.class_(
"APIUnregisterServiceCallAction", automation.Action
)
UserServiceTrigger = api_ns.class_("UserServiceTrigger", automation.Trigger)
ListEntitiesServicesArgument = api_ns.class_("ListEntitiesServicesArgument")
SERVICE_ARG_NATIVE_TYPES: dict[str, MockObj] = {
"bool": cg.bool_,
SERVICE_ARG_NATIVE_TYPES = {
"bool": bool,
"int": cg.int32,
"float": cg.float_,
"float": float,
"string": cg.std_string,
"bool[]": cg.FixedVector.template(cg.bool_).operator("const").operator("ref"),
"int[]": cg.FixedVector.template(cg.int32).operator("const").operator("ref"),
"float[]": cg.FixedVector.template(cg.float_).operator("const").operator("ref"),
"string[]": cg.FixedVector.template(cg.std_string)
.operator("const")
.operator("ref"),
"bool[]": cg.std_vector.template(bool),
"int[]": cg.std_vector.template(cg.int32),
"float[]": cg.std_vector.template(float),
"string[]": cg.std_vector.template(cg.std_string),
}
CONF_ENCRYPTION = "encryption"
CONF_BATCH_DELAY = "batch_delay"
CONF_CUSTOM_SERVICES = "custom_services"
CONF_HOMEASSISTANT_SERVICES = "homeassistant_services"
CONF_HOMEASSISTANT_STATES = "homeassistant_states"
CONF_LISTEN_BACKLOG = "listen_backlog"
CONF_MAX_SEND_QUEUE = "max_send_queue"
CONF_STATE_SUBSCRIPTION_ONLY = "state_subscription_only"
def validate_encryption_key(value):
@@ -108,85 +71,6 @@ def validate_encryption_key(value):
return value
CONF_SUPPORTS_RESPONSE = "supports_response"
# Enum values in api::enums namespace
enums_ns = api_ns.namespace("enums")
SUPPORTS_RESPONSE_OPTIONS = {
"none": enums_ns.SUPPORTS_RESPONSE_NONE,
"optional": enums_ns.SUPPORTS_RESPONSE_OPTIONAL,
"only": enums_ns.SUPPORTS_RESPONSE_ONLY,
"status": enums_ns.SUPPORTS_RESPONSE_STATUS,
}
def _auto_detect_supports_response(config: ConfigType) -> ConfigType:
"""Auto-detect supports_response based on api.respond usage in the action's then block.
- If api.respond with data found: set to "optional" (unless user explicitly set)
- If api.respond without data found: set to "status" (unless user explicitly set)
- If no api.respond found: set to "none" (unless user explicitly set)
"""
def scan_actions(items: ConfigFragmentType) -> tuple[bool, bool]:
"""Recursively scan actions for api.respond.
Returns: (found, has_data) tuple - has_data is True if ANY api.respond has data
"""
found_any = False
has_data_any = False
if isinstance(items, list):
for item in items:
found, has_data = scan_actions(item)
if found:
found_any = True
has_data_any = has_data_any or has_data
elif isinstance(items, dict):
# Check if this is an api.respond action
if "api.respond" in items:
respond_config = items["api.respond"]
has_data = isinstance(respond_config, dict) and "data" in respond_config
return True, has_data
# Recursively check all values
for value in items.values():
found, has_data = scan_actions(value)
if found:
found_any = True
has_data_any = has_data_any or has_data
return found_any, has_data_any
then = config.get(CONF_THEN, [])
action_name = config.get(CONF_ACTION)
found, has_data = scan_actions(then)
# If user explicitly set supports_response, validate and use that
if CONF_SUPPORTS_RESPONSE in config:
user_value = config[CONF_SUPPORTS_RESPONSE]
# Validate: "only" requires api.respond with data
if user_value == "only" and not has_data:
raise cv.Invalid(
f"Action '{action_name}' has supports_response=only but no api.respond "
"action with 'data:' was found. Use 'status' for responses without data, "
"or add 'data:' to your api.respond action."
)
return config
# Auto-detect based on api.respond usage
if found:
config[CONF_SUPPORTS_RESPONSE] = "optional" if has_data else "status"
else:
config[CONF_SUPPORTS_RESPONSE] = "none"
return config
def _validate_supports_response(value):
"""Validate supports_response after auto-detection has set the value."""
return cv.enum(SUPPORTS_RESPONSE_OPTIONS, lower=True)(value)
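
Editor's note: the docstring above maps `api.respond` usage inside a `then:` block to a `supports_response` mode. A simplified re-implementation of that scan, not the real validator, showing how a nested config resolves to "optional":

```python
# Simplified version of the scan described above: walk a parsed `then:` block
# and report (found_api_respond, any_respond_has_data).
def scan_actions(items):
    if isinstance(items, list):
        results = [scan_actions(i) for i in items]
        return any(f for f, _ in results), any(d for _, d in results)
    if isinstance(items, dict):
        if "api.respond" in items:
            cfg = items["api.respond"]
            return True, isinstance(cfg, dict) and "data" in cfg
        return scan_actions(list(items.values()))
    return False, False  # scalar leaf: nothing to find

then = [{"if": {"then": [{"api.respond": {"data": {"ok": True}}}]}}]
found, has_data = scan_actions(then)
mode = ("optional" if has_data else "status") if found else "none"
print(mode)  # optional
```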
ACTIONS_SCHEMA = automation.validate_automation(
{
cv.GenerateID(CONF_TRIGGER_ID): cv.declare_id(UserServiceTrigger),
@@ -197,20 +81,10 @@ ACTIONS_SCHEMA = automation.validate_automation(
cv.validate_id_name: cv.one_of(*SERVICE_ARG_NATIVE_TYPES, lower=True),
}
),
# No default - auto-detected by _auto_detect_supports_response
cv.Optional(CONF_SUPPORTS_RESPONSE): cv.enum(
SUPPORTS_RESPONSE_OPTIONS, lower=True
),
},
cv.All(
cv.has_exactly_one_key(CONF_SERVICE, CONF_ACTION),
cv.rename_key(CONF_SERVICE, CONF_ACTION),
_auto_detect_supports_response,
# Re-validate supports_response after auto-detection sets it
cv.Schema(
{cv.Required(CONF_SUPPORTS_RESPONSE): _validate_supports_response},
extra=cv.ALLOW_EXTRA,
),
),
)
@@ -227,33 +101,12 @@ def _encryption_schema(config):
return ENCRYPTION_SCHEMA(config)
def _consume_api_sockets(config: ConfigType) -> ConfigType:
"""Register socket needs for API component."""
from esphome.components import socket
# API needs 1 listening socket + typically 3 concurrent client connections
# (not max_connections, which is the upper limit rarely reached)
sockets_needed = 1 + 3
socket.consume_sockets(sockets_needed, "api")(config)
return config
CONFIG_SCHEMA = cv.All(
cv.Schema(
{
cv.GenerateID(): cv.declare_id(APIServer),
cv.Optional(CONF_PORT, default=6053): cv.port,
# Removed in 2026.1.0 - kept to provide helpful error message
cv.Optional(CONF_PASSWORD): cv.invalid(
"The 'password' option has been removed in ESPHome 2026.1.0.\n"
"Password authentication was deprecated in May 2022.\n"
"Please migrate to encryption for secure API communication:\n\n"
"api:\n"
" encryption:\n"
" key: !secret api_encryption_key\n\n"
"Generate a key with: openssl rand -base64 32\n"
"Or visit https://esphome.io/components/api/#configuration-variables"
),
cv.Optional(CONF_PASSWORD, default=""): cv.string_strict,
cv.Optional(
CONF_REBOOT_TIMEOUT, default="15min"
): cv.positive_time_period_milliseconds,
@@ -275,77 +128,27 @@ CONFIG_SCHEMA = cv.All(
cv.Optional(CONF_ON_CLIENT_DISCONNECTED): automation.validate_automation(
single=True
),
# Connection limits to prevent memory exhaustion on resource-constrained devices
# Each connection uses ~500-1000 bytes of RAM plus system resources
# Platform defaults based on available RAM and network stack implementation:
cv.SplitDefault(
CONF_LISTEN_BACKLOG,
esp8266=1, # Limited RAM (~40KB free), LWIP raw sockets
esp32=4, # More RAM (520KB), BSD sockets
rp2040=1, # Limited RAM (264KB), LWIP raw sockets like ESP8266
bk72xx=4, # Moderate RAM, BSD-style sockets
rtl87xx=4, # Moderate RAM, BSD-style sockets
host=4, # Abundant resources
ln882x=4, # Moderate RAM
): cv.int_range(min=1, max=10),
cv.SplitDefault(
CONF_MAX_CONNECTIONS,
esp8266=4, # ~40KB free RAM, each connection uses ~500-1000 bytes
esp32=8, # 520KB RAM available
rp2040=4, # 264KB RAM but LWIP constraints
bk72xx=8, # Moderate RAM
rtl87xx=8, # Moderate RAM
host=8, # Abundant resources
ln882x=8, # Moderate RAM
): cv.int_range(min=1, max=20),
# Maximum queued send buffers per connection before dropping connection
# Each buffer uses ~8-12 bytes overhead plus actual message size
# Platform defaults based on available RAM and typical message rates:
cv.SplitDefault(
CONF_MAX_SEND_QUEUE,
esp8266=5, # Limited RAM, need to fail fast
esp32=8, # More RAM, can buffer more
rp2040=5, # Limited RAM
bk72xx=8, # Moderate RAM
nrf52=8, # Moderate RAM
rtl87xx=8, # Moderate RAM
host=16, # Abundant resources
ln882x=8, # Moderate RAM
): cv.int_range(min=1, max=64),
}
).extend(cv.COMPONENT_SCHEMA),
cv.rename_key(CONF_SERVICES, CONF_ACTIONS),
_consume_api_sockets,
)
@coroutine_with_priority(CoroPriority.WEB)
async def to_code(config: ConfigType) -> None:
@coroutine_with_priority(40.0)
async def to_code(config):
var = cg.new_Pvariable(config[CONF_ID])
await cg.register_component(var, config)
# Track controller registration for StaticVector sizing
CORE.register_controller()
# Request a log listener slot for API log streaming
request_log_listener()
cg.add(var.set_port(config[CONF_PORT]))
if config[CONF_PASSWORD]:
cg.add_define("USE_API_PASSWORD")
cg.add(var.set_password(config[CONF_PASSWORD]))
cg.add(var.set_reboot_timeout(config[CONF_REBOOT_TIMEOUT]))
cg.add(var.set_batch_delay(config[CONF_BATCH_DELAY]))
if CONF_LISTEN_BACKLOG in config:
cg.add(var.set_listen_backlog(config[CONF_LISTEN_BACKLOG]))
if CONF_MAX_CONNECTIONS in config:
cg.add(var.set_max_connections(config[CONF_MAX_CONNECTIONS]))
cg.add_define("API_MAX_SEND_QUEUE", config[CONF_MAX_SEND_QUEUE])
# Set USE_API_USER_DEFINED_ACTIONS if any services are enabled
# Set USE_API_SERVICES if any services are enabled
if config.get(CONF_ACTIONS) or config[CONF_CUSTOM_SERVICES]:
cg.add_define("USE_API_USER_DEFINED_ACTIONS")
# Set USE_API_CUSTOM_SERVICES if external components need dynamic service registration
if config[CONF_CUSTOM_SERVICES]:
cg.add_define("USE_API_CUSTOM_SERVICES")
cg.add_define("USE_API_SERVICES")
if config[CONF_HOMEASSISTANT_SERVICES]:
cg.add_define("USE_API_HOMEASSISTANT_SERVICES")
@@ -354,66 +157,21 @@ async def to_code(config: ConfigType) -> None:
cg.add_define("USE_API_HOMEASSISTANT_STATES")
if actions := config.get(CONF_ACTIONS, []):
# Collect all triggers first, then register all at once with initializer_list
triggers: list[cg.Pvariable] = []
for conf in actions:
func_args: list[tuple[MockObj, str]] = []
service_template_args: list[MockObj] = [] # User service argument types
# Determine supports_response mode
# cv.enum returns the key, whose enum_value attribute holds the MockObj
supports_response_key = conf[CONF_SUPPORTS_RESPONSE]
supports_response = supports_response_key.enum_value
is_none = supports_response_key == "none"
is_optional = supports_response_key == "optional"
# Add call_id and return_response based on supports_response mode
# These must match the C++ Trigger template arguments
# - none: no extra args
# - status: call_id only (for reporting success/error without data)
# - only: call_id only (response always expected with data)
# - optional: call_id + return_response (client decides)
if not is_none:
# call_id is present for "optional", "only", and "status"
func_args.append((cg.uint32, "call_id"))
# return_response only present for "optional"
if is_optional:
func_args.append((cg.bool_, "return_response"))
service_arg_names: list[str] = []
template_args = []
func_args = []
service_arg_names = []
for name, var_ in conf[CONF_VARIABLES].items():
native = SERVICE_ARG_NATIVE_TYPES[var_]
service_template_args.append(native)
template_args.append(native)
func_args.append((native, name))
service_arg_names.append(name)
# Template args: supports_response mode, then user service arg types
templ = cg.TemplateArguments(supports_response, *service_template_args)
templ = cg.TemplateArguments(*template_args)
trigger = cg.new_Pvariable(
conf[CONF_TRIGGER_ID],
templ,
conf[CONF_ACTION],
service_arg_names,
conf[CONF_TRIGGER_ID], templ, conf[CONF_ACTION], service_arg_names
)
triggers.append(trigger)
auto = await automation.build_automation(trigger, func_args, conf)
# For non-none response modes, automatically append unregister action
# This ensures the call is unregistered after all actions complete (including async ones)
if not is_none:
arg_types = [arg[0] for arg in func_args]
action_templ = cg.TemplateArguments(*arg_types)
unregister_id = ID(
f"{conf[CONF_TRIGGER_ID]}__unregister",
is_declaration=True,
type=APIUnregisterServiceCallAction.template(action_templ),
)
unregister_action = cg.new_Pvariable(
unregister_id,
var,
)
cg.add(auto.add_actions([unregister_action]))
# Register all services at once - single allocation, no reallocations
cg.add(var.initialize_user_services(triggers))
cg.add(var.register_user_service(trigger))
await automation.build_automation(trigger, func_args, conf)
if CONF_ON_CLIENT_CONNECTED in config:
cg.add_define("USE_API_CLIENT_CONNECTED_TRIGGER")
@@ -435,7 +193,6 @@ async def to_code(config: ConfigType) -> None:
if key := encryption_config.get(CONF_KEY):
decoded = base64.b64decode(key)
cg.add(var.set_noise_psk(list(decoded)))
cg.add_define("USE_API_NOISE_PSK_FROM_YAML")
else:
# No key provided, but encryption desired
# This will allow a plaintext client to provide a noise key,
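The key branch above expects a base64 string that decodes to the raw Noise PSK; `openssl rand -base64 32` from the earlier migration message produces a 32-byte key. A standard-library-only check of that round trip:

```python
import base64
import os

# Equivalent to `openssl rand -base64 32` from the migration message above.
key = base64.b64encode(os.urandom(32)).decode()
decoded = base64.b64decode(key)
assert len(decoded) == 32
psk_bytes = list(decoded)  # the list-of-ints form passed to set_noise_psk()
```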
@@ -455,29 +212,6 @@ async def to_code(config: ConfigType) -> None:
KEY_VALUE_SCHEMA = cv.Schema({cv.string: cv.templatable(cv.string_strict)})
def _validate_response_config(config: ConfigType) -> ConfigType:
# Validate dependencies:
# - response_template requires capture_response: true
# - capture_response: true requires on_success
if CONF_RESPONSE_TEMPLATE in config and not config[CONF_CAPTURE_RESPONSE]:
raise cv.Invalid(
f"`{CONF_RESPONSE_TEMPLATE}` requires `{CONF_CAPTURE_RESPONSE}: true` to be set.",
path=[CONF_RESPONSE_TEMPLATE],
)
if config[CONF_CAPTURE_RESPONSE] and CONF_ON_SUCCESS not in config:
raise cv.Invalid(
f"`{CONF_CAPTURE_RESPONSE}: true` requires `{CONF_ON_SUCCESS}` to be set.",
path=[CONF_CAPTURE_RESPONSE],
)
# Track if any action uses capture_response for AUTO_LOAD
if config[CONF_CAPTURE_RESPONSE]:
CORE.data.setdefault(DOMAIN, {})[CONF_CAPTURE_RESPONSE] = True
return config
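The two dependency rules in `_validate_response_config` can be exercised directly; this sketch approximates `cv.Invalid` with `ValueError` and skips the path bookkeeping:

```python
def validate_response_config(config: dict) -> dict:
    if "response_template" in config and not config.get("capture_response"):
        raise ValueError("response_template requires capture_response: true")
    if config.get("capture_response") and "on_success" not in config:
        raise ValueError("capture_response: true requires on_success")
    return config

validate_response_config({"capture_response": True, "on_success": []})  # passes
try:
    validate_response_config({"response_template": "{{ x }}"})
except ValueError as err:
    assert "capture_response: true" in str(err)
```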
HOMEASSISTANT_ACTION_ACTION_SCHEMA = cv.All(
cv.Schema(
{
@@ -493,15 +227,10 @@ HOMEASSISTANT_ACTION_ACTION_SCHEMA = cv.All(
cv.Optional(CONF_VARIABLES, default={}): cv.Schema(
{cv.string: cv.returning_lambda}
),
cv.Optional(CONF_RESPONSE_TEMPLATE): cv.templatable(cv.string),
cv.Optional(CONF_CAPTURE_RESPONSE, default=False): cv.boolean,
cv.Optional(CONF_ON_SUCCESS): automation.validate_automation(single=True),
cv.Optional(CONF_ON_ERROR): automation.validate_automation(single=True),
}
),
cv.has_exactly_one_key(CONF_SERVICE, CONF_ACTION),
cv.rename_key(CONF_SERVICE, CONF_ACTION),
_validate_response_config,
)
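The tail of the schema accepts either `service:` or `action:` (but not both) and then normalizes onto `action`. An approximation of what the `has_exactly_one_key` + `rename_key` pair does:

```python
def normalize_service_key(config: dict) -> dict:
    # Exactly one of the two spellings must be present.
    if ("service" in config) == ("action" in config):
        raise ValueError("exactly one of 'service' or 'action' is required")
    if "service" in config:
        config["action"] = config.pop("service")  # rename_key equivalent
    return config

assert normalize_service_key({"service": "light.turn_on"}) == {"action": "light.turn_on"}
```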
@@ -515,67 +244,21 @@ HOMEASSISTANT_ACTION_ACTION_SCHEMA = cv.All(
HomeAssistantServiceCallAction,
HOMEASSISTANT_ACTION_ACTION_SCHEMA,
)
async def homeassistant_service_to_code(
config: ConfigType,
action_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
):
async def homeassistant_service_to_code(config, action_id, template_arg, args):
cg.add_define("USE_API_HOMEASSISTANT_SERVICES")
serv = await cg.get_variable(config[CONF_ID])
var = cg.new_Pvariable(action_id, template_arg, serv, False)
templ = await cg.templatable(config[CONF_ACTION], args, None)
cg.add(var.set_service(templ))
# Initialize FixedVectors with exact sizes from config
cg.add(var.init_data(len(config[CONF_DATA])))
for key, value in config[CONF_DATA].items():
templ = await cg.templatable(value, args, None)
cg.add(var.add_data(key, templ))
cg.add(var.init_data_template(len(config[CONF_DATA_TEMPLATE])))
for key, value in config[CONF_DATA_TEMPLATE].items():
templ = await cg.templatable(value, args, None)
cg.add(var.add_data_template(key, templ))
cg.add(var.init_variables(len(config[CONF_VARIABLES])))
for key, value in config[CONF_VARIABLES].items():
templ = await cg.templatable(value, args, None)
cg.add(var.add_variable(key, templ))
if on_error := config.get(CONF_ON_ERROR):
cg.add_define("USE_API_HOMEASSISTANT_ACTION_RESPONSES")
cg.add_define("USE_API_HOMEASSISTANT_ACTION_RESPONSES_ERRORS")
cg.add(var.set_wants_status())
await automation.build_automation(
var.get_error_trigger(),
[(cg.std_string, "error"), *args],
on_error,
)
if on_success := config.get(CONF_ON_SUCCESS):
cg.add_define("USE_API_HOMEASSISTANT_ACTION_RESPONSES")
cg.add(var.set_wants_status())
if config[CONF_CAPTURE_RESPONSE]:
cg.add(var.set_wants_response())
cg.add_define("USE_API_HOMEASSISTANT_ACTION_RESPONSES_JSON")
await automation.build_automation(
var.get_success_trigger_with_response(),
[(cg.JsonObjectConst, "response"), *args],
on_success,
)
if response_template := config.get(CONF_RESPONSE_TEMPLATE):
templ = await cg.templatable(response_template, args, cg.std_string)
cg.add(var.set_response_template(templ))
else:
await automation.build_automation(
var.get_success_trigger(),
args,
on_success,
)
return var
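The `init_data(len(...))` / `add_data(...)` pairs above size each container once before filling it. A Python stand-in for that FixedVector discipline (the real container is C++ and allocates exactly once):

```python
class FixedVector:
    """Stand-in: append-only container whose capacity is fixed by init()."""

    def __init__(self) -> None:
        self._items: list[tuple[str, str]] = []
        self._capacity = 0

    def init(self, n: int) -> None:
        self._capacity = n  # single allocation in the real implementation

    def add(self, key: str, value: str) -> None:
        if len(self._items) >= self._capacity:
            raise OverflowError("init() size was too small")
        self._items.append((key, value))

data = {"entity_id": "light.kitchen", "brightness": "128"}
vec = FixedVector()
vec.init(len(data))        # mirrors init_data(len(config[CONF_DATA]))
for key, value in data.items():
    vec.add(key, value)    # mirrors add_data(key, templ)
```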
@@ -611,23 +294,15 @@ async def homeassistant_event_to_code(config, action_id, template_arg, args):
var = cg.new_Pvariable(action_id, template_arg, serv, True)
templ = await cg.templatable(config[CONF_EVENT], args, None)
cg.add(var.set_service(templ))
# Initialize FixedVectors with exact sizes from config
cg.add(var.init_data(len(config[CONF_DATA])))
for key, value in config[CONF_DATA].items():
templ = await cg.templatable(value, args, None)
cg.add(var.add_data(key, templ))
cg.add(var.init_data_template(len(config[CONF_DATA_TEMPLATE])))
for key, value in config[CONF_DATA_TEMPLATE].items():
templ = await cg.templatable(value, args, None)
cg.add(var.add_data_template(key, templ))
cg.add(var.init_variables(len(config[CONF_VARIABLES])))
for key, value in config[CONF_VARIABLES].items():
templ = await cg.templatable(value, args, None)
cg.add(var.add_variable(key, templ))
return var
@@ -646,109 +321,17 @@ HOMEASSISTANT_TAG_SCANNED_ACTION_SCHEMA = cv.maybe_simple_value(
HOMEASSISTANT_TAG_SCANNED_ACTION_SCHEMA,
)
async def homeassistant_tag_scanned_to_code(config, action_id, template_arg, args):
cg.add_define("USE_API_HOMEASSISTANT_SERVICES")
serv = await cg.get_variable(config[CONF_ID])
var = cg.new_Pvariable(action_id, template_arg, serv, True)
cg.add(var.set_service("esphome.tag_scanned"))
# Initialize FixedVector with exact size (1 data field)
cg.add(var.init_data(1))
templ = await cg.templatable(config[CONF_TAG], args, cg.std_string)
cg.add(var.add_data("tag_id", templ))
return var
CONF_SUCCESS = "success"
CONF_ERROR_MESSAGE = "error_message"
def _validate_api_respond_data(config):
"""Set flag during validation so AUTO_LOAD can include json component."""
if CONF_DATA in config:
CORE.data.setdefault(DOMAIN, {})[CONF_CAPTURE_RESPONSE] = True
return config
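`_validate_api_respond_data` records a fact at validation time that `AUTO_LOAD` reads later. A sketch of that shared-storage handshake, with a hypothetical `auto_load` reader:

```python
# CORE.data stand-in: per-run scratch space shared between validation passes.
CORE_DATA: dict[str, dict] = {}

def mark_capture_response(domain: str) -> None:
    CORE_DATA.setdefault(domain, {})["capture_response"] = True

def auto_load(domain: str) -> list[str]:
    deps = ["socket"]
    if CORE_DATA.get(domain, {}).get("capture_response"):
        deps.append("json")  # pulled in only when a response is captured
    return deps

mark_capture_response("api")
assert "json" in auto_load("api")
```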
API_RESPOND_ACTION_SCHEMA = cv.All(
cv.Schema(
{
cv.GenerateID(): cv.use_id(APIServer),
cv.Optional(CONF_SUCCESS, default=True): cv.templatable(cv.boolean),
cv.Optional(CONF_ERROR_MESSAGE, default=""): cv.templatable(cv.string),
cv.Optional(CONF_DATA): cv.lambda_,
}
),
_validate_api_respond_data,
)
@automation.register_action(
"api.respond",
APIRespondAction,
API_RESPOND_ACTION_SCHEMA,
)
async def api_respond_to_code(
config: ConfigType,
action_id: ID,
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
# Validate that api.respond is used inside an API action context.
# We can't easily validate this at config time since the schema validation
# doesn't have access to the parent action context. Validating here in to_code
# is still much better than a cryptic C++ compile error.
has_call_id = any(name == "call_id" for _, name in args)
if not has_call_id:
raise EsphomeError(
"api.respond can only be used inside an API action's 'then:' block. "
"The 'call_id' variable is required to send a response."
)
cg.add_define("USE_API_USER_DEFINED_ACTION_RESPONSES")
serv = await cg.get_variable(config[CONF_ID])
var = cg.new_Pvariable(action_id, template_arg, serv)
# Check if we're in optional mode (has return_response arg)
is_optional = any(name == "return_response" for _, name in args)
if is_optional:
cg.add(var.set_is_optional_mode(True))
templ = await cg.templatable(config[CONF_SUCCESS], args, cg.bool_)
cg.add(var.set_success(templ))
templ = await cg.templatable(config[CONF_ERROR_MESSAGE], args, cg.std_string)
cg.add(var.set_error_message(templ))
if CONF_DATA in config:
cg.add_define("USE_API_USER_DEFINED_ACTION_RESPONSES_JSON")
# Lambda populates the JsonObject root - no return value needed
lambda_ = await cg.process_lambda(
config[CONF_DATA],
args + [(cg.JsonObject, "root")],
return_type=cg.void,
)
cg.add(var.set_data(lambda_))
return var
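The context check at the top of `api_respond_to_code` only needs the automation's argument names; pulled out on its own, it looks like this:

```python
TemplateArgs = list[tuple[type, str]]

def require_call_id(args: TemplateArgs) -> None:
    # call_id is injected only inside an API action's `then:` block.
    if not any(name == "call_id" for _, name in args):
        raise RuntimeError(
            "api.respond can only be used inside an API action's 'then:' block"
        )

require_call_id([(int, "call_id"), (bool, "return_response")])  # optional mode
try:
    require_call_id([(str, "message")])
except RuntimeError:
    pass  # expected outside an API action context
```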
API_CONNECTED_CONDITION_SCHEMA = cv.Schema(
{
cv.GenerateID(): cv.use_id(APIServer),
cv.Optional(CONF_STATE_SUBSCRIPTION_ONLY, default=False): cv.templatable(
cv.boolean
),
}
)
@automation.register_condition(
"api.connected", APIConnectedCondition, API_CONNECTED_CONDITION_SCHEMA
)
@automation.register_condition("api.connected", APIConnectedCondition, {})
async def api_connected_to_code(config, condition_id, template_arg, args):
var = cg.new_Pvariable(condition_id, template_arg)
templ = await cg.templatable(config[CONF_STATE_SUBSCRIPTION_ONLY], args, cg.bool_)
cg.add(var.set_state_subscription_only(templ))
return var
return cg.new_Pvariable(condition_id, template_arg)
def FILTER_SOURCE_FILES() -> list[str]:
