Compare commits

...

87 Commits

Author SHA1 Message Date
Jonathan Swoboda
a593965372 Merge pull request #12380 from esphome/bump-2025.11.5
2025.11.5
2025-12-09 12:54:30 -05:00
Jonathan Swoboda
4743e5592a Bump version to 2025.11.5 2025-12-09 12:02:53 -05:00
Jonathan Swoboda
464607011c [mqtt] Fix logger method case sensitivity error (#12379)
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-09 12:02:53 -05:00
J. Nick Koston
16fe8f9e9e [libretiny] Fix WiFi scan timeout loop when scan fails (#12356) 2025-12-09 12:02:46 -05:00
J. Nick Koston
436d2c44e8 [wifi] Fix scan timeout loop when scan returns zero networks (#12354) 2025-12-09 12:01:51 -05:00
J. Nick Koston
b213555dd2 [scheduler] Fix missing lock when recycling items in defer queue processing (#12343) 2025-12-09 12:01:51 -05:00
Clyde Stubbs
b6336f9e63 [lvgl] Number saves value on interactive change (#12315) 2025-12-09 12:01:51 -05:00
Clyde Stubbs
fb7800a22f [binary_sensor] Fix reporting of 'unknown' (#12296)
Co-authored-by: Jonathan Swoboda <154711427+swoboda1337@users.noreply.github.com>
Co-authored-by: J. Nick Koston <nick@home-assistant.io>
2025-12-09 12:01:51 -05:00
Jonathan Swoboda
42811edeb4 Merge pull request #12293 from esphome/bump-2025.11.4
2025.11.4
2025-12-04 22:54:55 -05:00
Jonathan Swoboda
8f20abebf6 Bump version to 2025.11.4 2025-12-04 21:52:48 -05:00
J. Nick Koston
7077488dc7 [scheduler] Fix use-after-free when cancelling timeouts from non-main-loop threads (#12288) 2025-12-04 21:52:48 -05:00
Jesse Hills
ef34239064 [CI] Trigger generic version notifier job on release (#12292) 2025-12-04 21:52:48 -05:00
Jonathan Swoboda
44148c0c6b [esp32_hosted] Fix build and bump IDF component version to 2.7.0 (#12282)
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-04 21:52:48 -05:00
Jonathan Swoboda
1b53fcf634 [es8311] Remove MIN and MAX from mic_gain enum options (#12281)
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-04 21:52:48 -05:00
Clyde Stubbs
b18e3d943a [config] Provide path for has_at_most_one_of messages (#12277) 2025-12-04 21:52:48 -05:00
Jonathan Swoboda
f0673f6304 [ld2420] Add missing USE_SELECT ifdefs (#12275)
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-04 21:52:48 -05:00
Clyde Stubbs
320ba30d50 [esp32] Add build flag to suppress noexecstack message (#12272) 2025-12-04 21:52:48 -05:00
Jonathan Swoboda
cfd88376b9 Merge pull request #12266 from esphome/bump-2025.11.3
2025.11.3
2025-12-03 11:36:57 -05:00
Jonathan Swoboda
577a6b2941 Bump version to 2025.11.3 2025-12-03 10:50:28 -05:00
Jonathan Swoboda
de68b56c4a [rtl87xx] Fix FreeRTOS version for RTL8720C boards (#12261)
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-03 10:50:28 -05:00
Jonathan Swoboda
ccd23e692b [analog_threshold] Fix oscillation when using invert filter (#12251)
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-03 10:50:28 -05:00
Jonathan Swoboda
1f5a44be3d [rtl87xx] Fix AsyncTCP compilation by upgrading FreeRTOS to 8.2.3 (#12230)
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-03 10:50:28 -05:00
Jonathan Swoboda
1d1e47c757 [core] Fix clean all windows (#12217)
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: J. Nick Koston <nick@home-assistant.io>
2025-12-03 10:50:28 -05:00
3fbed1fa79 [ade7953] Apply voltage_gain setting to both channels (#12180) 2025-12-03 10:50:28 -05:00
Jonathan Swoboda
5c71520635 [mopeka_pro_check] Fix negative temperatures (#12198)
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-03 10:50:28 -05:00
J. Nick Koston
9d6c81ec23 [hlk_fm22x] Fix Action::play method signatures (#12192) 2025-12-03 10:50:28 -05:00
Clyde Stubbs
73fa9230e6 [helpers] Add conversion from FixedVector to std::vector (#12179) 2025-12-03 10:50:28 -05:00
J. Nick Koston
48caff13c9 [espnow] Initialize LwIP stack when running without WiFi component (#12169) 2025-12-03 10:50:28 -05:00
J. Nick Koston
71bb94524e [usb_uart] Wake main loop immediately when USB data arrives (#12148) 2025-12-03 10:50:28 -05:00
Clyde Stubbs
a3199792c6 [build] Don't clear pio cache unless requested (#11966) 2025-12-03 10:50:28 -05:00
Jonathan Swoboda
50c1720c16 Merge pull request #12149 from esphome/bump-2025.11.2
2025.11.2
2025-11-27 18:19:05 -05:00
Jonathan Swoboda
4115dd7222 Bump version to 2025.11.2 2025-11-27 17:23:28 -05:00
J. Nick Koston
d5e2543751 [scheduler] Fix use-after-move crash in heap operations (#12124) 2025-11-27 17:23:28 -05:00
Clyde Stubbs
b4b34aee13 [wifi] Restore blocking setup until connected for RP2040 (#12142) 2025-11-27 17:23:28 -05:00
Jonathan Swoboda
6645994700 [esp32] Fix hosted update when there is no wifi (#12123) 2025-11-27 17:23:28 -05:00
Clyde Stubbs
ae140f52e3 [lvgl] Fix position of errors in widget config (#12111)
Co-authored-by: J. Nick Koston <nick@koston.org>
2025-11-27 17:23:28 -05:00
Clyde Stubbs
46ae6d35a2 [lvgl] Allow multiple widgets per grid cell (#12091) 2025-11-27 17:23:27 -05:00
J. Nick Koston
278f12fb99 [script] Fix script.wait hanging when triggered from on_boot (#12102) 2025-11-27 17:23:27 -05:00
Jonathan Swoboda
acdcd56395 [esp32] Fix platformio flash size print (#12099)
Co-authored-by: J. Nick Koston <nick+github@koston.org>
2025-11-27 17:23:27 -05:00
Edward Firmo
9289fc36f7 [nextion] Do not set alternative baud rate when not specified or <= 0 (#12097) 2025-11-27 17:23:27 -05:00
Jonathan Swoboda
3775b54554 Merge pull request #12086 from esphome/bump-2025.11.1
2025.11.1
2025-11-24 17:29:53 -05:00
Jonathan Swoboda
9186144dcd Bump version to 2025.11.1 2025-11-24 16:24:38 -05:00
Jesse Hills
25bcd0ea25 [online_image] Fix some large PNGs causing watchdog timeout (#12025)
Co-authored-by: guillempages <guillempages@users.noreply.github.com>
2025-11-24 16:24:38 -05:00
J. Nick Koston
50d08a2eba [esp_ldo,mipi_dsi,mipi_rgb] Fix dangling pointer bugs in mark_failed() (#12077) 2025-11-24 16:24:38 -05:00
J. Nick Koston
3a7a0c66ab [script][wait_until] Fix FIFO ordering and reentrancy bugs (#12049)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-24 16:24:38 -05:00
Jonathan Swoboda
83525b7a92 [core] Add support for passing yaml files to clean-all (#12039) 2025-11-24 16:24:38 -05:00
Jonathan Swoboda
f31f023c89 [esp32] Fix C2 builds (#12050) 2025-11-24 16:24:37 -05:00
J. Nick Koston
f8efefffaa [cst816][http_request] Fix status_set_error() dangling pointer bugs (#12033) 2025-11-24 16:24:37 -05:00
Jonathan Swoboda
d698083ede [jsn_sr04t] Fix model AJ_SR04M (#11992) 2025-11-24 16:24:37 -05:00
Jonathan Swoboda
11ba6440d7 [cst816][packet_transport][udp][wake_on_lan] Fix error messages (#12019) 2025-11-24 16:24:37 -05:00
Jonathan Swoboda
89ee37a2d5 [ltr501][ltr_als_ps] Rename enum to avoid collision with lwip defines (#12017) 2025-11-24 16:24:37 -05:00
J. Nick Koston
45b8c1e267 [network] Fix IPAddress constructor causing comparison failures and garbage output (#12005) 2025-11-24 16:24:37 -05:00
Jonathan Swoboda
fbe091f167 [graph] Fix legend border (#12000) 2025-11-24 16:24:37 -05:00
Jonathan Swoboda
625172e07d Merge pull request #12004 from esphome/bump-2025.11.0
2025.11.0
2025-11-19 17:37:42 -05:00
Jonathan Swoboda
1e9c7d3c6d Bump version to 2025.11.0 2025-11-19 16:02:52 -05:00
Jonathan Swoboda
c2bc7b3cdc Merge pull request #12003 from esphome/bump-2025.11.0b5
2025.11.0b5
2025-11-19 15:06:44 -05:00
Jonathan Swoboda
c75abfb894 Bump version to 2025.11.0b5 2025-11-19 14:17:03 -05:00
Jesse Hills
1157b4aee8 [epaper_spi] Add basic 7.3in-Spectra-E6 model (#12001) 2025-11-19 14:17:03 -05:00
J. Nick Koston
71dc2d374d [web_server_idf] Fix pbuf_free crash by moving shutdown before close (#11995) 2025-11-19 14:17:03 -05:00
Jonathan Swoboda
0a224f919b [wifi] Fix positive RSSI values on 8266 (#11994) 2025-11-19 14:17:03 -05:00
Jonathan Swoboda
7ef4b4f3d9 [text_sensor] Fix infinite loop in substitute filter (#11989)
Co-authored-by: J. Nick Koston <nick@koston.org>
2025-11-19 14:17:03 -05:00
J. Nick Koston
13b875c763 [tests] Fix SNTP time ID conflicts in component tests for grouped testing (#11990) 2025-11-19 14:17:03 -05:00
Jesse Hills
dfd614c00c Merge pull request #11980 from esphome/bump-2025.11.0b4
2025.11.0b4
2025-11-19 13:22:09 +13:00
Jesse Hills
2681a14d05 Bump version to 2025.11.0b4 2025-11-19 09:17:33 +13:00
J. Nick Koston
f436f6ee2e [wifi] Fix captive portal unusable when WiFi credentials are wrong (#11965) 2025-11-19 09:17:33 +13:00
Jonathan Swoboda
f18bc62690 [sfa30] Fix negative temperature values (#11973) 2025-11-19 09:17:33 +13:00
J. Nick Koston
6db73df649 [scheduler] Add defensive nullptr checks and explicit locking requirements (#11974) 2025-11-19 09:17:33 +13:00
Jonathan Swoboda
93215f1737 [esp32] Fix Arduino build on some ESP32 S2 boards (#11972) 2025-11-19 09:17:33 +13:00
Clyde Stubbs
70aa94b8a4 [lvgl] Apply scale to spinbox value (#11946) 2025-11-19 09:17:33 +13:00
strange_v
e8998a79c7 [mipi_rgb] Fix GUITION-4848S040 colors (#11709) 2025-11-19 09:17:33 +13:00
Jonathan Swoboda
3b25fdbc5f [core] Add support for setting environment variables (#11953) 2025-11-19 09:17:33 +13:00
J. Nick Koston
6c8577678c [captive_portal] Warn when enabled without WiFi AP configured (#11856) 2025-11-19 09:17:33 +13:00
Jesse Hills
70366d2124 Merge pull request #11944 from esphome/bump-2025.11.0b3
2025.11.0b3
2025-11-17 17:41:11 +13:00
Jesse Hills
a38c4e0c6e Bump version to 2025.11.0b3 2025-11-17 15:32:09 +13:00
Anton Sergunov
6c6b03bda0 [uart] Setup uart pins only if flags are set (#11914)
Co-authored-by: J. Nick Koston <nick+github@koston.org>
2025-11-17 15:32:09 +13:00
J. Nick Koston
9e02e31917 [web_server_idf] Fix lwIP assertion crash by shutting down sockets on connection close (#11937) 2025-11-17 15:32:09 +13:00
J. Nick Koston
3fd58f1a91 [web_server.ota] Merge multiple instances to prevent undefined behavior (#11905) 2025-11-17 15:32:09 +13:00
J. Nick Koston
9151489481 [sntp] Merge multiple instances to fix crash and undefined behavior (#11904) 2025-11-17 15:32:09 +13:00
J. Nick Koston
f19296ac7f [analyze-memory] Show all core symbols > 100 B instead of top 15 (#11909) 2025-11-17 15:32:09 +13:00
J. Nick Koston
36868ee7b1 [scheduler] Fix timing breakage after 49 days of uptime on ESP8266/RP2040 (#11924) 2025-11-17 15:32:09 +13:00
J. Nick Koston
d559f9f52e [ld2410] Add timeout filter to prevent stuck targets (#11920) 2025-11-17 15:32:09 +13:00
J. Nick Koston
6440b5fbf5 [ld2412] Fix stuck targets by adding timeout filter (#11919) 2025-11-17 15:32:09 +13:00
Jonathan Swoboda
97c4914573 [uart] Improve error handling and validate buffer size (#11895)
Co-authored-by: J. Nick Koston <nick+github@koston.org>
2025-11-17 15:32:09 +13:00
Edward Firmo
7ce94c27fe [wifi] Allow use_psram with Arduino (#11902) 2025-11-17 15:32:09 +13:00
Edward Firmo
eb54c0026d [light] Fix missing ColorMode::BRIGHTNESS case in logging (#11836) 2025-11-17 15:32:09 +13:00
Clyde Stubbs
fe00e209ff [esp32] Add sdkconfig flag to make OTA work for 32MB flash (#11883)
Co-authored-by: Jonathan Swoboda <154711427+swoboda1337@users.noreply.github.com>
2025-11-17 15:32:08 +13:00
Clyde Stubbs
aed80732f9 [esp32] Make esp-idf default framework for P4 (#11884) 2025-11-17 15:32:08 +13:00
108 changed files with 2613 additions and 404 deletions

View File

@@ -219,10 +219,19 @@ jobs:
- init
- deploy-manifest
steps:
- name: Generate a token
id: generate-token
uses: actions/create-github-app-token@7e473efe3cb98aa54f8d4bac15400b15fad77d94 # v2.2.0
with:
app-id: ${{ secrets.ESPHOME_GITHUB_APP_ID }}
private-key: ${{ secrets.ESPHOME_GITHUB_APP_PRIVATE_KEY }}
owner: esphome
repositories: home-assistant-addon
- name: Trigger Workflow
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
github-token: ${{ secrets.DEPLOY_HA_ADDON_REPO_TOKEN }}
github-token: ${{ steps.generate-token.outputs.token }}
script: |
let description = "ESPHome";
if (context.eventName == "release") {
@@ -245,10 +254,19 @@ jobs:
needs: [init]
environment: ${{ needs.init.outputs.deploy_env }}
steps:
- name: Generate a token
id: generate-token
uses: actions/create-github-app-token@7e473efe3cb98aa54f8d4bac15400b15fad77d94 # v2.2.0
with:
app-id: ${{ secrets.ESPHOME_GITHUB_APP_ID }}
private-key: ${{ secrets.ESPHOME_GITHUB_APP_PRIVATE_KEY }}
owner: esphome
repositories: esphome-schema
- name: Trigger Workflow
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
github-token: ${{ secrets.DEPLOY_ESPHOME_SCHEMA_REPO_TOKEN }}
github-token: ${{ steps.generate-token.outputs.token }}
script: |
github.rest.actions.createWorkflowDispatch({
owner: "esphome",
@@ -259,3 +277,34 @@ jobs:
version: "${{ needs.init.outputs.tag }}",
}
})
version-notifier:
if: github.repository == 'esphome/esphome' && needs.init.outputs.branch_build == 'false'
runs-on: ubuntu-latest
needs:
- init
- deploy-manifest
steps:
- name: Generate a token
id: generate-token
uses: actions/create-github-app-token@7e473efe3cb98aa54f8d4bac15400b15fad77d94 # v2.2.0
with:
app-id: ${{ secrets.ESPHOME_GITHUB_APP_ID }}
private-key: ${{ secrets.ESPHOME_GITHUB_APP_PRIVATE_KEY }}
owner: esphome
repositories: version-notifier
- name: Trigger Workflow
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
github-token: ${{ steps.generate-token.outputs.token }}
script: |
github.rest.actions.createWorkflowDispatch({
owner: "esphome",
repo: "version-notifier",
workflow_id: "notify.yml",
ref: "main",
inputs: {
version: "${{ needs.init.outputs.tag }}",
}
})

View File

@@ -48,7 +48,7 @@ PROJECT_NAME = ESPHome
# could be handy for archiving the generated documentation or if some version
# control system is used.
PROJECT_NUMBER = 2025.11.0b2
PROJECT_NUMBER = 2025.11.5
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer a

View File

@@ -1319,7 +1319,7 @@ def parse_args(argv):
"clean-all", help="Clean all build and platform files."
)
parser_clean_all.add_argument(
"configuration", help="Your YAML configuration directory.", nargs="*"
"configuration", help="Your YAML file or configuration directory.", nargs="*"
)
parser_dashboard = subparsers.add_parser(

View File

@@ -15,6 +15,11 @@ from . import (
class MemoryAnalyzerCLI(MemoryAnalyzer):
"""Memory analyzer with CLI-specific report generation."""
# Symbol size threshold for detailed analysis
SYMBOL_SIZE_THRESHOLD: int = (
100 # Show symbols larger than this in detailed analysis
)
# Column width constants
COL_COMPONENT: int = 29
COL_FLASH_TEXT: int = 14
@@ -191,14 +196,21 @@ class MemoryAnalyzerCLI(MemoryAnalyzer):
f"{len(symbols):>{self.COL_CORE_COUNT}} | {percentage:>{self.COL_CORE_PERCENT - 1}.1f}%"
)
# Top 15 largest core symbols
# All core symbols above threshold
lines.append("")
lines.append(f"Top 15 Largest {_COMPONENT_CORE} Symbols:")
sorted_core_symbols = sorted(
self._esphome_core_symbols, key=lambda x: x[2], reverse=True
)
large_core_symbols = [
(symbol, demangled, size)
for symbol, demangled, size in sorted_core_symbols
if size > self.SYMBOL_SIZE_THRESHOLD
]
for i, (symbol, demangled, size) in enumerate(sorted_core_symbols[:15]):
lines.append(
f"{_COMPONENT_CORE} Symbols > {self.SYMBOL_SIZE_THRESHOLD} B ({len(large_core_symbols)} symbols):"
)
for i, (symbol, demangled, size) in enumerate(large_core_symbols):
lines.append(f"{i + 1}. {demangled} ({size:,} B)")
lines.append("=" * self.TABLE_WIDTH)
@@ -268,13 +280,15 @@ class MemoryAnalyzerCLI(MemoryAnalyzer):
lines.append(f"Total size: {comp_mem.flash_total:,} B")
lines.append("")
# Show all symbols > 100 bytes for better visibility
# Show all symbols above threshold for better visibility
large_symbols = [
(sym, dem, size) for sym, dem, size in sorted_symbols if size > 100
(sym, dem, size)
for sym, dem, size in sorted_symbols
if size > self.SYMBOL_SIZE_THRESHOLD
]
lines.append(
f"{comp_name} Symbols > 100 B ({len(large_symbols)} symbols):"
f"{comp_name} Symbols > {self.SYMBOL_SIZE_THRESHOLD} B ({len(large_symbols)} symbols):"
)
for i, (symbol, demangled, size) in enumerate(large_symbols):
lines.append(f"{i + 1}. {demangled} ({size:,} B)")

View File

@@ -25,7 +25,8 @@ void ADE7953::setup() {
this->ade_write_8(PGA_V_8, pga_v_);
this->ade_write_8(PGA_IA_8, pga_ia_);
this->ade_write_8(PGA_IB_8, pga_ib_);
this->ade_write_32(AVGAIN_32, vgain_);
this->ade_write_32(AVGAIN_32, avgain_);
this->ade_write_32(BVGAIN_32, bvgain_);
this->ade_write_32(AIGAIN_32, aigain_);
this->ade_write_32(BIGAIN_32, bigain_);
this->ade_write_32(AWGAIN_32, awgain_);
@@ -34,7 +35,8 @@ void ADE7953::setup() {
this->ade_read_8(PGA_V_8, &pga_v_);
this->ade_read_8(PGA_IA_8, &pga_ia_);
this->ade_read_8(PGA_IB_8, &pga_ib_);
this->ade_read_32(AVGAIN_32, &vgain_);
this->ade_read_32(AVGAIN_32, &avgain_);
this->ade_read_32(BVGAIN_32, &bvgain_);
this->ade_read_32(AIGAIN_32, &aigain_);
this->ade_read_32(BIGAIN_32, &bigain_);
this->ade_read_32(AWGAIN_32, &awgain_);
@@ -63,13 +65,14 @@ void ADE7953::dump_config() {
" PGA_V_8: 0x%X\n"
" PGA_IA_8: 0x%X\n"
" PGA_IB_8: 0x%X\n"
" VGAIN_32: 0x%08jX\n"
" AVGAIN_32: 0x%08jX\n"
" BVGAIN_32: 0x%08jX\n"
" AIGAIN_32: 0x%08jX\n"
" BIGAIN_32: 0x%08jX\n"
" AWGAIN_32: 0x%08jX\n"
" BWGAIN_32: 0x%08jX",
this->use_acc_energy_regs_, pga_v_, pga_ia_, pga_ib_, (uintmax_t) vgain_, (uintmax_t) aigain_,
(uintmax_t) bigain_, (uintmax_t) awgain_, (uintmax_t) bwgain_);
this->use_acc_energy_regs_, pga_v_, pga_ia_, pga_ib_, (uintmax_t) avgain_, (uintmax_t) bvgain_,
(uintmax_t) aigain_, (uintmax_t) bigain_, (uintmax_t) awgain_, (uintmax_t) bwgain_);
}
#define ADE_PUBLISH_(name, val, factor) \

View File

@@ -46,7 +46,12 @@ class ADE7953 : public PollingComponent, public sensor::Sensor {
void set_pga_ib(uint8_t pga_ib) { pga_ib_ = pga_ib; }
// Set input gains
void set_vgain(uint32_t vgain) { vgain_ = vgain; }
void set_vgain(uint32_t vgain) {
// Datasheet says: "to avoid discrepancies in other registers,
// if AVGAIN is set then BVGAIN should be set to the same value."
avgain_ = vgain;
bvgain_ = vgain;
}
void set_aigain(uint32_t aigain) { aigain_ = aigain; }
void set_bigain(uint32_t bigain) { bigain_ = bigain; }
void set_awgain(uint32_t awgain) { awgain_ = awgain; }
@@ -100,7 +105,8 @@ class ADE7953 : public PollingComponent, public sensor::Sensor {
uint8_t pga_v_;
uint8_t pga_ia_;
uint8_t pga_ib_;
uint32_t vgain_;
uint32_t avgain_;
uint32_t bvgain_;
uint32_t aigain_;
uint32_t bigain_;
uint32_t awgain_;

View File

@@ -12,10 +12,11 @@ void AnalogThresholdBinarySensor::setup() {
// TRUE state is defined to be when sensor is >= threshold
// so when undefined sensor value initialize to FALSE
if (std::isnan(sensor_value)) {
this->raw_state_ = false;
this->publish_initial_state(false);
} else {
this->publish_initial_state(sensor_value >=
(this->lower_threshold_.value() + this->upper_threshold_.value()) / 2.0f);
this->raw_state_ = sensor_value >= (this->lower_threshold_.value() + this->upper_threshold_.value()) / 2.0f;
this->publish_initial_state(this->raw_state_);
}
}
@@ -25,8 +26,10 @@ void AnalogThresholdBinarySensor::set_sensor(sensor::Sensor *analog_sensor) {
this->sensor_->add_on_state_callback([this](float sensor_value) {
// if there is an invalid sensor reading, ignore the change and keep the current state
if (!std::isnan(sensor_value)) {
this->publish_state(sensor_value >=
(this->state ? this->lower_threshold_.value() : this->upper_threshold_.value()));
// Use raw_state_ for hysteresis logic, not this->state which is post-filter
this->raw_state_ =
sensor_value >= (this->raw_state_ ? this->lower_threshold_.value() : this->upper_threshold_.value());
this->publish_state(this->raw_state_);
}
});
}

View File

@@ -20,6 +20,7 @@ class AnalogThresholdBinarySensor : public Component, public binary_sensor::Bina
sensor::Sensor *sensor_{nullptr};
TemplatableValue<float> upper_threshold_{};
TemplatableValue<float> lower_threshold_{};
bool raw_state_{false}; // Pre-filter state for hysteresis logic
};
} // namespace analog_threshold
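
For context on the hysteresis change above, here is a minimal, self-contained C++ sketch of the pattern the fix relies on: the threshold comparison works on a separate pre-filter state so that downstream filters (which may invert or delay the published state) cannot feed back into the comparison. Class and member names are illustrative, not ESPHome's actual API.

```cpp
// Minimal hysteresis sketch with a separate pre-filter state (illustrative names).
#include <cmath>

class HysteresisThreshold {
 public:
  HysteresisThreshold(float lower, float upper) : lower_(lower), upper_(upper) {}

  // Returns the new raw state for a sample; NaN samples keep the current state.
  bool update(float sample) {
    if (std::isnan(sample))
      return raw_state_;
    // While ON, stay ON until the sample drops below the lower threshold;
    // while OFF, stay OFF until the sample reaches the upper threshold.
    raw_state_ = sample >= (raw_state_ ? lower_ : upper_);
    return raw_state_;
  }

 private:
  float lower_, upper_;
  bool raw_state_{false};  // pre-filter state, independent of any published/filtered state
};
```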

View File

@@ -36,13 +36,20 @@ void BinarySensor::publish_initial_state(bool new_state) {
void BinarySensor::send_state_internal(bool new_state) {
// copy the new state to the visible property for backwards compatibility, before any callbacks
this->state = new_state;
// Note that set_state_ de-dups and will only trigger callbacks if the state has actually changed
if (this->set_state_(new_state)) {
ESP_LOGD(TAG, "'%s': New state is %s", this->get_name().c_str(), ONOFF(new_state));
// Note that set_new_state_ de-dups and will only trigger callbacks if the state has actually changed
this->set_new_state(new_state);
}
bool BinarySensor::set_new_state(const optional<bool> &new_state) {
if (StatefulEntityBase::set_new_state(new_state)) {
// weirdly, this file could be compiled even without USE_BINARY_SENSOR defined
#if defined(USE_BINARY_SENSOR) && defined(USE_CONTROLLER_REGISTRY)
ControllerRegistry::notify_binary_sensor_update(this);
#endif
ESP_LOGD(TAG, "'%s': %s", this->get_name().c_str(), ONOFFMAYBE(new_state));
return true;
}
return false;
}
void BinarySensor::add_filter(Filter *filter) {

View File

@@ -63,6 +63,8 @@ class BinarySensor : public StatefulEntityBase<bool>, public EntityBase_DeviceCl
protected:
Filter *filter_list_{nullptr};
bool set_new_state(const optional<bool> &new_state) override;
};
class BinarySensorInitiallyOff : public BinarySensor {
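
A generic sketch of the de-duplication idea referenced in the comment above ("set_new_state de-dups and will only trigger callbacks if the state has actually changed"), using std::optional rather than ESPHome's own optional type; treat it as an illustration of the pattern, not the actual StatefulEntityBase implementation.

```cpp
// Generic de-duplicating state holder: callbacks fire only on actual changes,
// with "unknown" (nullopt) treated as its own state.
#include <functional>
#include <optional>
#include <vector>

class DedupState {
 public:
  void add_callback(std::function<void(std::optional<bool>)> cb) { callbacks_.push_back(std::move(cb)); }

  // Returns true and notifies listeners only when the state actually changed.
  bool set_new_state(std::optional<bool> new_state) {
    if (state_ == new_state)
      return false;
    state_ = new_state;
    for (auto &cb : callbacks_)
      cb(state_);
    return true;
  }

 private:
  std::optional<bool> state_{};  // nullopt == unknown
  std::vector<std::function<void(std::optional<bool>)>> callbacks_;
};
```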

View File

@@ -1,9 +1,12 @@
import logging
import esphome.codegen as cg
from esphome.components import web_server_base
from esphome.components.web_server_base import CONF_WEB_SERVER_BASE_ID
from esphome.config_helpers import filter_source_files_from_platform
import esphome.config_validation as cv
from esphome.const import (
CONF_AP,
CONF_ID,
PLATFORM_BK72XX,
PLATFORM_ESP32,
@@ -14,6 +17,10 @@ from esphome.const import (
)
from esphome.core import CORE, coroutine_with_priority
from esphome.coroutine import CoroPriority
import esphome.final_validate as fv
from esphome.types import ConfigType
_LOGGER = logging.getLogger(__name__)
def AUTO_LOAD() -> list[str]:
@@ -50,6 +57,37 @@ CONFIG_SCHEMA = cv.All(
)
def _final_validate(config: ConfigType) -> ConfigType:
full_config = fv.full_config.get()
wifi_conf = full_config.get("wifi")
if wifi_conf is None:
# This shouldn't happen due to DEPENDENCIES = ["wifi"], but check anyway
raise cv.Invalid("Captive portal requires the wifi component to be configured")
if CONF_AP not in wifi_conf:
_LOGGER.warning(
"Captive portal is enabled but no WiFi AP is configured. "
"The captive portal will not be accessible. "
"Add 'ap:' to your WiFi configuration to enable the captive portal."
)
# Register socket needs for DNS server and additional HTTP connections
# - 1 UDP socket for DNS server
# - 3 additional TCP sockets for captive portal detection probes + configuration requests
# OS captive portal detection makes multiple probe requests that stay in TIME_WAIT.
# Need headroom for actual user configuration requests.
# LRU purging will reclaim idle sockets to prevent exhaustion from repeated attempts.
from esphome.components import socket
socket.consume_sockets(4, "captive_portal")(config)
return config
FINAL_VALIDATE_SCHEMA = _final_validate
@coroutine_with_priority(CoroPriority.CAPTIVE_PORTAL)
async def to_code(config):
paren = await cg.get_variable(config[CONF_WEB_SERVER_BASE_ID])

View File

@@ -50,8 +50,8 @@ void CaptivePortal::handle_wifisave(AsyncWebServerRequest *request) {
ESP_LOGI(TAG, "Requested WiFi Settings Change:");
ESP_LOGI(TAG, " SSID='%s'", ssid.c_str());
ESP_LOGI(TAG, " Password=" LOG_SECRET("'%s'"), psk.c_str());
wifi::global_wifi_component->save_wifi_sta(ssid, psk);
wifi::global_wifi_component->start_scanning();
// Defer save to main loop thread to avoid NVS operations from HTTP thread
this->defer([ssid, psk]() { wifi::global_wifi_component->save_wifi_sta(ssid, psk); });
request->redirect(ESPHOME_F("/?save"));
}
@@ -63,6 +63,12 @@ void CaptivePortal::start() {
this->base_->init();
if (!this->initialized_) {
this->base_->add_handler(this);
#ifdef USE_ESP32
// Enable LRU socket purging to handle captive portal detection probe bursts
// OS captive portal detection makes many simultaneous HTTP requests which can
// exhaust sockets. LRU purging automatically closes oldest idle connections.
this->base_->get_server()->set_lru_purge_enable(true);
#endif
}
network::IPAddress ip = wifi::global_wifi_component->wifi_soft_ap_ip();

View File

@@ -40,6 +40,10 @@ class CaptivePortal : public AsyncWebHandler, public Component {
void end() {
this->active_ = false;
this->disable_loop(); // Stop processing DNS requests
#ifdef USE_ESP32
// Disable LRU socket purging now that captive portal is done
this->base_->get_server()->set_lru_purge_enable(false);
#endif
this->base_->deinit();
if (this->dns_server_ != nullptr) {
this->dns_server_->stop();
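
The LRU purge calls above map onto a plain ESP-IDF feature. A hedged sketch of how that option is normally set on a bare esp_http_server instance (assuming the stock httpd_config_t fields; ESPHome's wrapper exposes it through set_lru_purge_enable instead):

```cpp
// Plain ESP-IDF usage of the LRU purge option (assumed httpd_config_t fields).
#include "esp_http_server.h"

esp_err_t start_portal_server(httpd_handle_t *server) {
  httpd_config_t config = HTTPD_DEFAULT_CONFIG();
  config.lru_purge_enable = true;  // close the least recently used idle socket instead of refusing new clients
  config.max_open_sockets = 7;     // small socket pools are what captive-portal probe bursts exhaust
  return httpd_start(server, &config);
}
```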

View File

@@ -19,8 +19,9 @@ void CST816Touchscreen::continue_setup_() {
case CST816T_CHIP_ID:
break;
default:
ESP_LOGE(TAG, "Unknown chip ID: 0x%02X", this->chip_id_);
this->status_set_error("Unknown chip ID");
this->mark_failed();
this->status_set_error(str_sprintf("Unknown chip ID 0x%02X", this->chip_id_).c_str());
return;
}
this->write_byte(REG_IRQ_CTL, IRQ_EN_MOTION);

View File

@@ -102,7 +102,7 @@ def customise_schema(config):
"""
config = cv.Schema(
{
cv.Required(CONF_MODEL): cv.one_of(*MODELS, upper=True),
cv.Required(CONF_MODEL): cv.one_of(*MODELS, upper=True, space="-"),
},
extra=cv.ALLOW_EXTRA,
)(config)

View File

@@ -32,11 +32,15 @@ class SpectraE6(EpaperModel):
spectra_e6 = SpectraE6("spectra-e6")
spectra_e6.extend(
"Seeed-reTerminal-E1002",
spectra_e6_7p3 = spectra_e6.extend(
"7.3in-Spectra-E6",
width=800,
height=480,
data_rate="20MHz",
)
spectra_e6_7p3.extend(
"Seeed-reTerminal-E1002",
cs_pin=10,
dc_pin=11,
reset_pin=12,

View File

@@ -22,7 +22,6 @@ ES8311_BITS_PER_SAMPLE_ENUM = {
es8311_mic_gain = es8311_ns.enum("ES8311MicGain")
ES8311_MIC_GAIN_ENUM = {
"MIN": es8311_mic_gain.ES8311_MIC_GAIN_MIN,
"0DB": es8311_mic_gain.ES8311_MIC_GAIN_0DB,
"6DB": es8311_mic_gain.ES8311_MIC_GAIN_6DB,
"12DB": es8311_mic_gain.ES8311_MIC_GAIN_12DB,
@@ -31,7 +30,6 @@ ES8311_MIC_GAIN_ENUM = {
"30DB": es8311_mic_gain.ES8311_MIC_GAIN_30DB,
"36DB": es8311_mic_gain.ES8311_MIC_GAIN_36DB,
"42DB": es8311_mic_gain.ES8311_MIC_GAIN_42DB,
"MAX": es8311_mic_gain.ES8311_MIC_GAIN_MAX,
}

View File

@@ -381,8 +381,9 @@ PLATFORM_VERSION_LOOKUP = {
}
def _check_versions(value):
value = value.copy()
def _check_versions(config):
config = config.copy()
value = config[CONF_FRAMEWORK]
if value[CONF_VERSION] in PLATFORM_VERSION_LOOKUP:
if CONF_SOURCE in value or CONF_PLATFORM_VERSION in value:
@@ -447,7 +448,7 @@ def _check_versions(value):
"If there are connectivity or build issues please remove the manual version."
)
return value
return config
def _parse_platform_version(value):
@@ -497,6 +498,8 @@ def final_validate(config):
from esphome.components.psram import DOMAIN as PSRAM_DOMAIN
errs = []
conf_fw = config[CONF_FRAMEWORK]
advanced = conf_fw[CONF_ADVANCED]
full_config = fv.full_config.get()
if pio_options := full_config[CONF_ESPHOME].get(CONF_PLATFORMIO_OPTIONS):
pio_flash_size_key = "board_upload.flash_size"
@@ -513,22 +516,14 @@ def final_validate(config):
f"Please specify {CONF_FLASH_SIZE} within esp32 configuration only"
)
)
if (
config[CONF_VARIANT] != VARIANT_ESP32
and CONF_ADVANCED in (conf_fw := config[CONF_FRAMEWORK])
and CONF_IGNORE_EFUSE_MAC_CRC in conf_fw[CONF_ADVANCED]
):
if config[CONF_VARIANT] != VARIANT_ESP32 and advanced[CONF_IGNORE_EFUSE_MAC_CRC]:
errs.append(
cv.Invalid(
f"'{CONF_IGNORE_EFUSE_MAC_CRC}' is not supported on {config[CONF_VARIANT]}",
path=[CONF_FRAMEWORK, CONF_ADVANCED, CONF_IGNORE_EFUSE_MAC_CRC],
)
)
if (
config.get(CONF_FRAMEWORK, {})
.get(CONF_ADVANCED, {})
.get(CONF_EXECUTE_FROM_PSRAM)
):
if advanced[CONF_EXECUTE_FROM_PSRAM]:
if config[CONF_VARIANT] != VARIANT_ESP32S3:
errs.append(
cv.Invalid(
@@ -544,6 +539,17 @@ def final_validate(config):
)
)
if (
config[CONF_FLASH_SIZE] == "32MB"
and "ota" in full_config
and not advanced[CONF_ENABLE_IDF_EXPERIMENTAL_FEATURES]
):
errs.append(
cv.Invalid(
f"OTA with 32MB flash requires '{CONF_ENABLE_IDF_EXPERIMENTAL_FEATURES}' to be set in the '{CONF_ADVANCED}' section of the esp32 configuration",
path=[CONF_FLASH_SIZE],
)
)
if errs:
raise cv.MultipleInvalid(errs)
@@ -598,89 +604,74 @@ def _validate_idf_component(config: ConfigType) -> ConfigType:
FRAMEWORK_ESP_IDF = "esp-idf"
FRAMEWORK_ARDUINO = "arduino"
FRAMEWORK_SCHEMA = cv.All(
cv.Schema(
{
cv.Optional(CONF_TYPE, default=FRAMEWORK_ARDUINO): cv.one_of(
FRAMEWORK_ESP_IDF, FRAMEWORK_ARDUINO
),
cv.Optional(CONF_VERSION, default="recommended"): cv.string_strict,
cv.Optional(CONF_RELEASE): cv.string_strict,
cv.Optional(CONF_SOURCE): cv.string_strict,
cv.Optional(CONF_PLATFORM_VERSION): _parse_platform_version,
cv.Optional(CONF_SDKCONFIG_OPTIONS, default={}): {
cv.string_strict: cv.string_strict
},
cv.Optional(CONF_LOG_LEVEL, default="ERROR"): cv.one_of(
*LOG_LEVELS_IDF, upper=True
),
cv.Optional(CONF_ADVANCED, default={}): cv.Schema(
{
cv.Optional(CONF_ASSERTION_LEVEL): cv.one_of(
*ASSERTION_LEVELS, upper=True
),
cv.Optional(CONF_COMPILER_OPTIMIZATION, default="SIZE"): cv.one_of(
*COMPILER_OPTIMIZATIONS, upper=True
),
cv.Optional(CONF_ENABLE_IDF_EXPERIMENTAL_FEATURES): cv.boolean,
cv.Optional(CONF_ENABLE_LWIP_ASSERT, default=True): cv.boolean,
cv.Optional(
CONF_IGNORE_EFUSE_CUSTOM_MAC, default=False
): cv.boolean,
cv.Optional(CONF_IGNORE_EFUSE_MAC_CRC): cv.boolean,
# DHCP server is needed for WiFi AP mode. When WiFi component is used,
# it will handle disabling DHCP server when AP is not configured.
# Default to false (disabled) when WiFi is not used.
cv.OnlyWithout(
CONF_ENABLE_LWIP_DHCP_SERVER, "wifi", default=False
): cv.boolean,
cv.Optional(
CONF_ENABLE_LWIP_MDNS_QUERIES, default=True
): cv.boolean,
cv.Optional(
CONF_ENABLE_LWIP_BRIDGE_INTERFACE, default=False
): cv.boolean,
cv.Optional(
CONF_ENABLE_LWIP_TCPIP_CORE_LOCKING, default=True
): cv.boolean,
cv.Optional(
CONF_ENABLE_LWIP_CHECK_THREAD_SAFETY, default=True
): cv.boolean,
cv.Optional(
CONF_DISABLE_LIBC_LOCKS_IN_IRAM, default=True
): cv.boolean,
cv.Optional(
CONF_DISABLE_VFS_SUPPORT_TERMIOS, default=True
): cv.boolean,
cv.Optional(
CONF_DISABLE_VFS_SUPPORT_SELECT, default=True
): cv.boolean,
cv.Optional(CONF_DISABLE_VFS_SUPPORT_DIR, default=True): cv.boolean,
cv.Optional(CONF_EXECUTE_FROM_PSRAM): cv.boolean,
cv.Optional(CONF_LOOP_TASK_STACK_SIZE, default=8192): cv.int_range(
min=8192, max=32768
),
}
),
cv.Optional(CONF_COMPONENTS, default=[]): cv.ensure_list(
cv.All(
cv.Schema(
{
cv.Required(CONF_NAME): cv.string_strict,
cv.Optional(CONF_SOURCE): cv.git_ref,
cv.Optional(CONF_REF): cv.string,
cv.Optional(CONF_PATH): cv.string,
cv.Optional(CONF_REFRESH): cv.All(
cv.string, cv.source_refresh
),
}
),
_validate_idf_component,
)
),
}
),
_check_versions,
FRAMEWORK_SCHEMA = cv.Schema(
{
cv.Optional(CONF_TYPE): cv.one_of(FRAMEWORK_ESP_IDF, FRAMEWORK_ARDUINO),
cv.Optional(CONF_VERSION, default="recommended"): cv.string_strict,
cv.Optional(CONF_RELEASE): cv.string_strict,
cv.Optional(CONF_SOURCE): cv.string_strict,
cv.Optional(CONF_PLATFORM_VERSION): _parse_platform_version,
cv.Optional(CONF_SDKCONFIG_OPTIONS, default={}): {
cv.string_strict: cv.string_strict
},
cv.Optional(CONF_LOG_LEVEL, default="ERROR"): cv.one_of(
*LOG_LEVELS_IDF, upper=True
),
cv.Optional(CONF_ADVANCED, default={}): cv.Schema(
{
cv.Optional(CONF_ASSERTION_LEVEL): cv.one_of(
*ASSERTION_LEVELS, upper=True
),
cv.Optional(CONF_COMPILER_OPTIMIZATION, default="SIZE"): cv.one_of(
*COMPILER_OPTIMIZATIONS, upper=True
),
cv.Optional(
CONF_ENABLE_IDF_EXPERIMENTAL_FEATURES, default=False
): cv.boolean,
cv.Optional(CONF_ENABLE_LWIP_ASSERT, default=True): cv.boolean,
cv.Optional(CONF_IGNORE_EFUSE_CUSTOM_MAC, default=False): cv.boolean,
cv.Optional(CONF_IGNORE_EFUSE_MAC_CRC, default=False): cv.boolean,
# DHCP server is needed for WiFi AP mode. When WiFi component is used,
# it will handle disabling DHCP server when AP is not configured.
# Default to false (disabled) when WiFi is not used.
cv.OnlyWithout(
CONF_ENABLE_LWIP_DHCP_SERVER, "wifi", default=False
): cv.boolean,
cv.Optional(CONF_ENABLE_LWIP_MDNS_QUERIES, default=True): cv.boolean,
cv.Optional(
CONF_ENABLE_LWIP_BRIDGE_INTERFACE, default=False
): cv.boolean,
cv.Optional(
CONF_ENABLE_LWIP_TCPIP_CORE_LOCKING, default=True
): cv.boolean,
cv.Optional(
CONF_ENABLE_LWIP_CHECK_THREAD_SAFETY, default=True
): cv.boolean,
cv.Optional(CONF_DISABLE_LIBC_LOCKS_IN_IRAM, default=True): cv.boolean,
cv.Optional(CONF_DISABLE_VFS_SUPPORT_TERMIOS, default=True): cv.boolean,
cv.Optional(CONF_DISABLE_VFS_SUPPORT_SELECT, default=True): cv.boolean,
cv.Optional(CONF_DISABLE_VFS_SUPPORT_DIR, default=True): cv.boolean,
cv.Optional(CONF_EXECUTE_FROM_PSRAM, default=False): cv.boolean,
cv.Optional(CONF_LOOP_TASK_STACK_SIZE, default=8192): cv.int_range(
min=8192, max=32768
),
}
),
cv.Optional(CONF_COMPONENTS, default=[]): cv.ensure_list(
cv.All(
cv.Schema(
{
cv.Required(CONF_NAME): cv.string_strict,
cv.Optional(CONF_SOURCE): cv.git_ref,
cv.Optional(CONF_REF): cv.string,
cv.Optional(CONF_PATH): cv.string,
cv.Optional(CONF_REFRESH): cv.All(cv.string, cv.source_refresh),
}
),
_validate_idf_component,
)
),
}
)
@@ -743,11 +734,11 @@ def _show_framework_migration_message(name: str, variant: str) -> None:
def _set_default_framework(config):
config = config.copy()
if CONF_FRAMEWORK not in config:
config = config.copy()
variant = config[CONF_VARIANT]
config[CONF_FRAMEWORK] = FRAMEWORK_SCHEMA({})
if CONF_TYPE not in config[CONF_FRAMEWORK]:
variant = config[CONF_VARIANT]
if variant in ARDUINO_ALLOWED_VARIANTS:
config[CONF_FRAMEWORK][CONF_TYPE] = FRAMEWORK_ARDUINO
_show_framework_migration_message(
@@ -787,6 +778,7 @@ CONFIG_SCHEMA = cv.All(
),
_detect_variant,
_set_default_framework,
_check_versions,
set_core_data,
cv.has_at_least_one_key(CONF_BOARD, CONF_VARIANT),
)
@@ -805,9 +797,7 @@ def _configure_lwip_max_sockets(conf: dict) -> None:
from esphome.components.socket import KEY_SOCKET_CONSUMERS
# Check if user manually specified CONFIG_LWIP_MAX_SOCKETS
user_max_sockets = conf.get(CONF_SDKCONFIG_OPTIONS, {}).get(
"CONFIG_LWIP_MAX_SOCKETS"
)
user_max_sockets = conf[CONF_SDKCONFIG_OPTIONS].get("CONFIG_LWIP_MAX_SOCKETS")
socket_consumers: dict[str, int] = CORE.data.get(KEY_SOCKET_CONSUMERS, {})
total_sockets = sum(socket_consumers.values())
@@ -864,8 +854,13 @@ def _configure_lwip_max_sockets(conf: dict) -> None:
async def to_code(config):
cg.add_platformio_option("board", config[CONF_BOARD])
cg.add_platformio_option("board_upload.flash_size", config[CONF_FLASH_SIZE])
cg.add_platformio_option(
"board_upload.maximum_size",
int(config[CONF_FLASH_SIZE].removesuffix("MB")) * 1024 * 1024,
)
cg.set_cpp_standard("gnu++20")
cg.add_build_flag("-DUSE_ESP32")
cg.add_build_flag("-Wl,-z,noexecstack")
cg.add_define("ESPHOME_BOARD", config[CONF_BOARD])
variant = config[CONF_VARIANT]
cg.add_build_flag(f"-DUSE_ESP32_VARIANT_{variant}")
@@ -893,6 +888,12 @@ async def to_code(config):
CORE.relative_internal_path(".espressif")
)
add_extra_script(
"pre",
"pre_build.py",
Path(__file__).parent / "pre_build.py.script",
)
add_extra_script(
"post",
"post_build.py",
@@ -941,6 +942,12 @@ async def to_code(config):
add_idf_sdkconfig_option("CONFIG_MBEDTLS_CERTIFICATE_BUNDLE", True)
add_idf_sdkconfig_option("CONFIG_ESP_PHY_REDUCE_TX_POWER", True)
# ESP32-S2 Arduino: Disable USB Serial on boot to avoid TinyUSB dependency
if get_esp32_variant() == VARIANT_ESP32S2:
cg.add_build_unflag("-DARDUINO_USB_CDC_ON_BOOT=1")
cg.add_build_unflag("-DARDUINO_USB_CDC_ON_BOOT=0")
cg.add_build_flag("-DARDUINO_USB_CDC_ON_BOOT=0")
cg.add_build_flag("-Wno-nonnull-compare")
add_idf_sdkconfig_option(f"CONFIG_IDF_TARGET_{variant}", True)
@@ -977,23 +984,18 @@ async def to_code(config):
# WiFi component handles its own optimization when AP mode is not used
# When using Arduino with Ethernet, DHCP server functions must be available
# for the Network library to compile, even if not actively used
if (
CONF_ENABLE_LWIP_DHCP_SERVER in advanced
and not advanced[CONF_ENABLE_LWIP_DHCP_SERVER]
and not (
conf[CONF_TYPE] == FRAMEWORK_ARDUINO
and "ethernet" in CORE.loaded_integrations
)
if advanced.get(CONF_ENABLE_LWIP_DHCP_SERVER) is False and not (
conf[CONF_TYPE] == FRAMEWORK_ARDUINO and "ethernet" in CORE.loaded_integrations
):
add_idf_sdkconfig_option("CONFIG_LWIP_DHCPS", False)
if not advanced.get(CONF_ENABLE_LWIP_MDNS_QUERIES, True):
if not advanced[CONF_ENABLE_LWIP_MDNS_QUERIES]:
add_idf_sdkconfig_option("CONFIG_LWIP_DNS_SUPPORT_MDNS_QUERIES", False)
if not advanced.get(CONF_ENABLE_LWIP_BRIDGE_INTERFACE, False):
if not advanced[CONF_ENABLE_LWIP_BRIDGE_INTERFACE]:
add_idf_sdkconfig_option("CONFIG_LWIP_BRIDGEIF_MAX_PORTS", 0)
_configure_lwip_max_sockets(conf)
if advanced.get(CONF_EXECUTE_FROM_PSRAM, False):
if advanced[CONF_EXECUTE_FROM_PSRAM]:
add_idf_sdkconfig_option("CONFIG_SPIRAM_FETCH_INSTRUCTIONS", True)
add_idf_sdkconfig_option("CONFIG_SPIRAM_RODATA", True)
@@ -1004,23 +1006,22 @@ async def to_code(config):
# - select() on 4 sockets: ~190μs (Arduino/core locking) vs ~235μs (ESP-IDF default)
# - Up to 200% slower under load when all operations queue through tcpip_thread
# Enabling this makes ESP-IDF socket performance match Arduino framework.
if advanced.get(CONF_ENABLE_LWIP_TCPIP_CORE_LOCKING, True):
if advanced[CONF_ENABLE_LWIP_TCPIP_CORE_LOCKING]:
add_idf_sdkconfig_option("CONFIG_LWIP_TCPIP_CORE_LOCKING", True)
if advanced.get(CONF_ENABLE_LWIP_CHECK_THREAD_SAFETY, True):
if advanced[CONF_ENABLE_LWIP_CHECK_THREAD_SAFETY]:
add_idf_sdkconfig_option("CONFIG_LWIP_CHECK_THREAD_SAFETY", True)
# Disable placing libc locks in IRAM to save RAM
# This is safe for ESPHome since no IRAM ISRs (interrupts that run while cache is disabled)
# use libc lock APIs. Saves approximately 1.3KB (1,356 bytes) of IRAM.
if advanced.get(CONF_DISABLE_LIBC_LOCKS_IN_IRAM, True):
if advanced[CONF_DISABLE_LIBC_LOCKS_IN_IRAM]:
add_idf_sdkconfig_option("CONFIG_LIBC_LOCKS_PLACE_IN_IRAM", False)
# Disable VFS support for termios (terminal I/O functions)
# ESPHome doesn't use termios functions on ESP32 (only used in host UART driver).
# Saves approximately 1.8KB of flash when disabled (default).
add_idf_sdkconfig_option(
"CONFIG_VFS_SUPPORT_TERMIOS",
not advanced.get(CONF_DISABLE_VFS_SUPPORT_TERMIOS, True),
"CONFIG_VFS_SUPPORT_TERMIOS", not advanced[CONF_DISABLE_VFS_SUPPORT_TERMIOS]
)
# Disable VFS support for select() with file descriptors
@@ -1034,8 +1035,7 @@ async def to_code(config):
else:
# No component needs it - allow user to control (default: disabled)
add_idf_sdkconfig_option(
"CONFIG_VFS_SUPPORT_SELECT",
not advanced.get(CONF_DISABLE_VFS_SUPPORT_SELECT, True),
"CONFIG_VFS_SUPPORT_SELECT", not advanced[CONF_DISABLE_VFS_SUPPORT_SELECT]
)
# Disable VFS support for directory functions (opendir, readdir, mkdir, etc.)
@@ -1048,8 +1048,7 @@ async def to_code(config):
else:
# No component needs it - allow user to control (default: disabled)
add_idf_sdkconfig_option(
"CONFIG_VFS_SUPPORT_DIR",
not advanced.get(CONF_DISABLE_VFS_SUPPORT_DIR, True),
"CONFIG_VFS_SUPPORT_DIR", not advanced[CONF_DISABLE_VFS_SUPPORT_DIR]
)
cg.add_platformio_option("board_build.partitions", "partitions.csv")
@@ -1063,7 +1062,7 @@ async def to_code(config):
add_idf_sdkconfig_option(flag, assertion_level == key)
add_idf_sdkconfig_option("CONFIG_COMPILER_OPTIMIZATION_DEFAULT", False)
compiler_optimization = advanced.get(CONF_COMPILER_OPTIMIZATION)
compiler_optimization = advanced[CONF_COMPILER_OPTIMIZATION]
for key, flag in COMPILER_OPTIMIZATIONS.items():
add_idf_sdkconfig_option(flag, compiler_optimization == key)
@@ -1072,18 +1071,20 @@ async def to_code(config):
conf[CONF_ADVANCED][CONF_ENABLE_LWIP_ASSERT],
)
if advanced.get(CONF_IGNORE_EFUSE_MAC_CRC):
if advanced[CONF_IGNORE_EFUSE_MAC_CRC]:
add_idf_sdkconfig_option("CONFIG_ESP_MAC_IGNORE_MAC_CRC_ERROR", True)
add_idf_sdkconfig_option("CONFIG_ESP_PHY_CALIBRATION_AND_DATA_STORAGE", False)
if advanced.get(CONF_ENABLE_IDF_EXPERIMENTAL_FEATURES):
if advanced[CONF_ENABLE_IDF_EXPERIMENTAL_FEATURES]:
_LOGGER.warning(
"Using experimental features in ESP-IDF may result in unexpected failures."
)
add_idf_sdkconfig_option("CONFIG_IDF_EXPERIMENTAL_FEATURES", True)
if config[CONF_FLASH_SIZE] == "32MB":
add_idf_sdkconfig_option(
"CONFIG_BOOTLOADER_CACHE_32BIT_ADDR_QUAD_FLASH", True
)
cg.add_define(
"ESPHOME_LOOP_TASK_STACK_SIZE", advanced.get(CONF_LOOP_TASK_STACK_SIZE)
)
cg.add_define("ESPHOME_LOOP_TASK_STACK_SIZE", advanced[CONF_LOOP_TASK_STACK_SIZE])
cg.add_define(
"USE_ESP_IDF_VERSION_CODE",

View File

@@ -0,0 +1,9 @@
Import("env") # noqa: F821
# Remove custom_sdkconfig from the board config as it causes
# pioarduino to enable some strange hybrid build mode that breaks IDF
board = env.BoardConfig()
if "espidf.custom_sdkconfig" in board:
del board._manifest["espidf"]["custom_sdkconfig"]
if not board._manifest["espidf"]:
del board._manifest["espidf"]

View File

@@ -93,9 +93,9 @@ async def to_code(config):
framework_ver: cv.Version = CORE.data[KEY_CORE][KEY_FRAMEWORK_VERSION]
os.environ["ESP_IDF_VERSION"] = f"{framework_ver.major}.{framework_ver.minor}"
if framework_ver >= cv.Version(5, 5, 0):
esp32.add_idf_component(name="espressif/esp_wifi_remote", ref="1.1.5")
esp32.add_idf_component(name="espressif/esp_wifi_remote", ref="1.2.2")
esp32.add_idf_component(name="espressif/eppp_link", ref="1.1.3")
esp32.add_idf_component(name="espressif/esp_hosted", ref="2.6.1")
esp32.add_idf_component(name="espressif/esp_hosted", ref="2.7.0")
else:
esp32.add_idf_component(name="espressif/esp_wifi_remote", ref="0.13.0")
esp32.add_idf_component(name="espressif/eppp_link", ref="0.2.0")

View File

@@ -22,6 +22,11 @@ constexpr size_t CHUNK_SIZE = 1500;
void Esp32HostedUpdate::setup() {
this->update_info_.title = "ESP32 Hosted Coprocessor";
// if wifi is not present, connect to the coprocessor
#ifndef USE_WIFI
esp_hosted_connect_to_slave(); // NOLINT
#endif
// get coprocessor version
esp_hosted_coprocessor_fwver_t ver_info;
if (esp_hosted_get_coprocessor_fwversion(&ver_info) == ESP_OK) {

View File

@@ -20,6 +20,10 @@ CONF_ON_STOP = "on_stop"
CONF_STATUS_INDICATOR = "status_indicator"
CONF_WIFI_TIMEOUT = "wifi_timeout"
# Default WiFi timeout - aligned with WiFi component ap_timeout
# Allows sufficient time to try all BSSIDs before starting provisioning mode
DEFAULT_WIFI_TIMEOUT = "90s"
improv_ns = cg.esphome_ns.namespace("improv")
Error = improv_ns.enum("Error")
@@ -59,7 +63,7 @@ CONFIG_SCHEMA = (
CONF_AUTHORIZED_DURATION, default="1min"
): cv.positive_time_period_milliseconds,
cv.Optional(
CONF_WIFI_TIMEOUT, default="1min"
CONF_WIFI_TIMEOUT, default=DEFAULT_WIFI_TIMEOUT
): cv.positive_time_period_milliseconds,
cv.Optional(CONF_ON_PROVISIONED): automation.validate_automation(
{

View File

@@ -127,6 +127,7 @@ void ESP32ImprovComponent::loop() {
// Set initial state based on whether we have an authorizer
this->set_state_(this->get_initial_state_(), false);
this->set_error_(improv::ERROR_NONE);
this->should_start_ = false; // Clear flag after starting
ESP_LOGD(TAG, "Service started!");
}
}

View File

@@ -45,6 +45,7 @@ class ESP32ImprovComponent : public Component, public improv_base::ImprovBase {
void start();
void stop();
bool is_active() const { return this->state_ != improv::STATE_STOPPED; }
bool should_start() const { return this->should_start_; }
#ifdef USE_ESP32_IMPROV_STATE_CALLBACK
void add_on_state_callback(std::function<void(improv::State, improv::Error)> &&callback) {

View File

@@ -14,8 +14,8 @@ void EspLdo::setup() {
config.flags.adjustable = this->adjustable_;
auto err = esp_ldo_acquire_channel(&config, &this->handle_);
if (err != ESP_OK) {
auto msg = str_sprintf("Failed to acquire LDO channel %d with voltage %fV", this->channel_, this->voltage_);
this->mark_failed(msg.c_str());
ESP_LOGE(TAG, "Failed to acquire LDO channel %d with voltage %fV", this->channel_, this->voltage_);
this->mark_failed("Failed to acquire LDO channel");
} else {
ESP_LOGD(TAG, "Acquired LDO channel %d with voltage %fV", this->channel_, this->voltage_);
}
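
The esp_ldo change above is one of several dangling-pointer fixes in this range (see also the cst816 and http_request commits): passing c_str() of a local std::string to a callee that only stores the pointer leaves it dangling once the local is destroyed. A small generic sketch of the hazard and the safer pattern, with hypothetical names:

```cpp
// Dangling-pointer hazard when the callee stores the pointer (hypothetical names).
#include <cstdio>
#include <string>

static const char *g_error_message = nullptr;

void mark_failed_demo(const char *msg) { g_error_message = msg; }  // stores the pointer, not a copy

void setup_bad(int channel) {
  std::string msg = "Failed to acquire LDO channel " + std::to_string(channel);
  mark_failed_demo(msg.c_str());  // BUG: msg is destroyed on return, g_error_message dangles
}

void setup_good(int channel) {
  std::printf("Failed to acquire LDO channel %d\n", channel);  // dynamic detail goes to the log
  mark_failed_demo("Failed to acquire LDO channel");           // stored pointer is a string literal with static lifetime
}
```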

View File

@@ -10,6 +10,7 @@
#include <esp_event.h>
#include <esp_mac.h>
#include <esp_netif.h>
#include <esp_now.h>
#include <esp_random.h>
#include <esp_wifi.h>
@@ -157,6 +158,12 @@ bool ESPNowComponent::is_wifi_enabled() {
}
void ESPNowComponent::setup() {
#ifndef USE_WIFI
// Initialize LwIP stack for wake_loop_threadsafe() socket support
// When WiFi component is present, it handles esp_netif_init()
ESP_ERROR_CHECK(esp_netif_init());
#endif
if (this->enable_on_boot_) {
this->enable_();
} else {

View File

@@ -337,7 +337,7 @@ void Graph::draw_legend(display::Display *buff, uint16_t x_offset, uint16_t y_of
return;
/// Plot border
if (this->border_) {
if (legend_->border_) {
int w = legend_->width_;
int h = legend_->height_;
buff->horizontal_line(x_offset, y_offset, w, color);

View File

@@ -189,7 +189,7 @@ template<typename... Ts> class EnrollmentAction : public Action<Ts...>, public P
TEMPLATABLE_VALUE(std::string, name)
TEMPLATABLE_VALUE(uint8_t, direction)
void play(Ts... x) override {
void play(const Ts &...x) override {
auto name = this->name_.value(x...);
auto direction = (HlkFm22xFaceDirection) this->direction_.value(x...);
this->parent_->enroll_face(name, direction);
@@ -200,7 +200,7 @@ template<typename... Ts> class DeleteAction : public Action<Ts...>, public Paren
public:
TEMPLATABLE_VALUE(int16_t, face_id)
void play(Ts... x) override {
void play(const Ts &...x) override {
auto face_id = this->face_id_.value(x...);
this->parent_->delete_face(face_id);
}
@@ -208,17 +208,17 @@ template<typename... Ts> class DeleteAction : public Action<Ts...>, public Paren
template<typename... Ts> class DeleteAllAction : public Action<Ts...>, public Parented<HlkFm22xComponent> {
public:
void play(Ts... x) override { this->parent_->delete_all_faces(); }
void play(const Ts &...x) override { this->parent_->delete_all_faces(); }
};
template<typename... Ts> class ScanAction : public Action<Ts...>, public Parented<HlkFm22xComponent> {
public:
void play(Ts... x) override { this->parent_->scan_face(); }
void play(const Ts &...x) override { this->parent_->scan_face(); }
};
template<typename... Ts> class ResetAction : public Action<Ts...>, public Parented<HlkFm22xComponent> {
public:
void play(Ts... x) override { this->parent_->reset(); }
void play(const Ts &...x) override { this->parent_->reset(); }
};
} // namespace esphome::hlk_fm22x
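
The play() signature changes above follow the general C++ override rule: a derived method only overrides the base virtual if the parameter list matches exactly, and `override` turns a mismatch into a compile error rather than a silently unrelated overload. A minimal illustration with generic names (not the real ESPHome Action hierarchy):

```cpp
// Override signatures must match the base exactly (generic names).
template<typename... Ts> struct ActionBase {
  virtual void play(const Ts &...x) = 0;
  virtual ~ActionBase() = default;
};

template<typename... Ts> struct ScanDemo : public ActionBase<Ts...> {
  // void play(Ts... x) override {}      // error: does not override play(const Ts &...)
  void play(const Ts &...x) override {}  // matches the base signature
};
```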

View File

@@ -49,18 +49,18 @@ void HttpRequestUpdate::update_task(void *params) {
auto container = this_update->request_parent_->get(this_update->source_url_);
if (container == nullptr || container->status_code != HTTP_STATUS_OK) {
std::string msg = str_sprintf("Failed to fetch manifest from %s", this_update->source_url_.c_str());
ESP_LOGE(TAG, "Failed to fetch manifest from %s", this_update->source_url_.c_str());
// Defer to main loop to avoid race condition on component_state_ read-modify-write
this_update->defer([this_update, msg]() { this_update->status_set_error(msg.c_str()); });
this_update->defer([this_update]() { this_update->status_set_error("Failed to fetch manifest"); });
UPDATE_RETURN;
}
RAMAllocator<uint8_t> allocator;
uint8_t *data = allocator.allocate(container->content_length);
if (data == nullptr) {
std::string msg = str_sprintf("Failed to allocate %zu bytes for manifest", container->content_length);
ESP_LOGE(TAG, "Failed to allocate %zu bytes for manifest", container->content_length);
// Defer to main loop to avoid race condition on component_state_ read-modify-write
this_update->defer([this_update, msg]() { this_update->status_set_error(msg.c_str()); });
this_update->defer([this_update]() { this_update->status_set_error("Failed to allocate memory for manifest"); });
container->end();
UPDATE_RETURN;
}
@@ -121,9 +121,9 @@ void HttpRequestUpdate::update_task(void *params) {
}
if (!valid) {
std::string msg = str_sprintf("Failed to parse JSON from %s", this_update->source_url_.c_str());
ESP_LOGE(TAG, "Failed to parse JSON from %s", this_update->source_url_.c_str());
// Defer to main loop to avoid race condition on component_state_ read-modify-write
this_update->defer([this_update, msg]() { this_update->status_set_error(msg.c_str()); });
this_update->defer([this_update]() { this_update->status_set_error("Failed to parse manifest JSON"); });
UPDATE_RETURN;
}
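
The http_request changes above defer status updates so that shared component state is only touched from the main loop. A minimal, generic sketch of that defer-to-loop-thread pattern (a hypothetical MainLoop class, not ESPHome's Component::defer):

```cpp
// Defer-to-loop-thread pattern: other threads queue closures, the loop drains them.
#include <functional>
#include <mutex>
#include <queue>
#include <utility>

class MainLoop {
 public:
  // Safe to call from any thread: the closure (and anything it captures by value) is queued.
  void defer(std::function<void()> fn) {
    std::lock_guard<std::mutex> lock(mutex_);
    queue_.push(std::move(fn));
  }

  // Called only from the loop thread: run everything queued so far.
  void run_pending() {
    std::queue<std::function<void()>> pending;
    {
      std::lock_guard<std::mutex> lock(mutex_);
      std::swap(pending, queue_);
    }
    while (!pending.empty()) {
      pending.front()();
      pending.pop();
    }
  }

 private:
  std::mutex mutex_;
  std::queue<std::function<void()>> queue_;
};
```

A worker thread would then queue something like `loop.defer([msg = std::string("fetch failed")] { /* update status on the loop thread */ });`, so the captured string is owned by the closure and the status field is never written from two threads at once.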

View File

@@ -10,7 +10,7 @@ namespace jsn_sr04t {
static const char *const TAG = "jsn_sr04t.sensor";
void Jsnsr04tComponent::update() {
this->write_byte(0x55);
this->write_byte((this->model_ == AJ_SR04M) ? 0x01 : 0x55);
ESP_LOGV(TAG, "Request read out from sensor");
}
@@ -31,19 +31,10 @@ void Jsnsr04tComponent::loop() {
}
void Jsnsr04tComponent::check_buffer_() {
uint8_t checksum = 0;
switch (this->model_) {
case JSN_SR04T:
checksum = this->buffer_[0] + this->buffer_[1] + this->buffer_[2];
break;
case AJ_SR04M:
checksum = this->buffer_[1] + this->buffer_[2];
break;
}
uint8_t checksum = this->buffer_[0] + this->buffer_[1] + this->buffer_[2];
if (this->buffer_[3] == checksum) {
uint16_t distance = encode_uint16(this->buffer_[1], this->buffer_[2]);
if (distance > 250) {
if (distance > ((this->model_ == AJ_SR04M) ? 200 : 250)) {
float meters = distance / 1000.0f;
ESP_LOGV(TAG, "Distance from sensor: %umm, %.3fm", distance, meters);
this->publish_state(meters);

View File

@@ -31,35 +31,83 @@ CONFIG_SCHEMA = cv.Schema(
cv.GenerateID(CONF_LD2410_ID): cv.use_id(LD2410Component),
cv.Optional(CONF_MOVING_DISTANCE): sensor.sensor_schema(
device_class=DEVICE_CLASS_DISTANCE,
filters=[{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)}],
filters=[
{
"timeout": {
"timeout": cv.TimePeriod(milliseconds=1000),
"value": "last",
}
},
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)},
],
icon=ICON_SIGNAL,
unit_of_measurement=UNIT_CENTIMETER,
),
cv.Optional(CONF_STILL_DISTANCE): sensor.sensor_schema(
device_class=DEVICE_CLASS_DISTANCE,
filters=[{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)}],
filters=[
{
"timeout": {
"timeout": cv.TimePeriod(milliseconds=1000),
"value": "last",
}
},
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)},
],
icon=ICON_SIGNAL,
unit_of_measurement=UNIT_CENTIMETER,
),
cv.Optional(CONF_MOVING_ENERGY): sensor.sensor_schema(
filters=[{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)}],
filters=[
{
"timeout": {
"timeout": cv.TimePeriod(milliseconds=1000),
"value": "last",
}
},
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)},
],
icon=ICON_MOTION_SENSOR,
unit_of_measurement=UNIT_PERCENT,
),
cv.Optional(CONF_STILL_ENERGY): sensor.sensor_schema(
filters=[{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)}],
filters=[
{
"timeout": {
"timeout": cv.TimePeriod(milliseconds=1000),
"value": "last",
}
},
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)},
],
icon=ICON_FLASH,
unit_of_measurement=UNIT_PERCENT,
),
cv.Optional(CONF_LIGHT): sensor.sensor_schema(
device_class=DEVICE_CLASS_ILLUMINANCE,
entity_category=ENTITY_CATEGORY_DIAGNOSTIC,
filters=[{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)}],
filters=[
{
"timeout": {
"timeout": cv.TimePeriod(milliseconds=1000),
"value": "last",
}
},
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)},
],
icon=ICON_LIGHTBULB,
),
cv.Optional(CONF_DETECTION_DISTANCE): sensor.sensor_schema(
device_class=DEVICE_CLASS_DISTANCE,
filters=[{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)}],
filters=[
{
"timeout": {
"timeout": cv.TimePeriod(milliseconds=1000),
"value": "last",
}
},
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)},
],
icon=ICON_SIGNAL,
unit_of_measurement=UNIT_CENTIMETER,
),
@@ -73,7 +121,13 @@ CONFIG_SCHEMA = CONFIG_SCHEMA.extend(
cv.Optional(CONF_MOVE_ENERGY): sensor.sensor_schema(
entity_category=ENTITY_CATEGORY_DIAGNOSTIC,
filters=[
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)}
{
"timeout": {
"timeout": cv.TimePeriod(milliseconds=1000),
"value": "last",
}
},
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)},
],
icon=ICON_MOTION_SENSOR,
unit_of_measurement=UNIT_PERCENT,
@@ -81,7 +135,13 @@ CONFIG_SCHEMA = CONFIG_SCHEMA.extend(
cv.Optional(CONF_STILL_ENERGY): sensor.sensor_schema(
entity_category=ENTITY_CATEGORY_DIAGNOSTIC,
filters=[
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)}
{
"timeout": {
"timeout": cv.TimePeriod(milliseconds=1000),
"value": "last",
}
},
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)},
],
icon=ICON_FLASH,
unit_of_measurement=UNIT_PERCENT,

View File

@@ -31,36 +31,84 @@ CONFIG_SCHEMA = cv.Schema(
cv.GenerateID(CONF_LD2412_ID): cv.use_id(LD2412Component),
cv.Optional(CONF_DETECTION_DISTANCE): sensor.sensor_schema(
device_class=DEVICE_CLASS_DISTANCE,
filters=[{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)}],
filters=[
{
"timeout": {
"timeout": cv.TimePeriod(milliseconds=1000),
"value": "last",
}
},
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)},
],
icon=ICON_SIGNAL,
unit_of_measurement=UNIT_CENTIMETER,
),
cv.Optional(CONF_LIGHT): sensor.sensor_schema(
device_class=DEVICE_CLASS_ILLUMINANCE,
entity_category=ENTITY_CATEGORY_DIAGNOSTIC,
filters=[{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)}],
filters=[
{
"timeout": {
"timeout": cv.TimePeriod(milliseconds=1000),
"value": "last",
}
},
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)},
],
icon=ICON_LIGHTBULB,
unit_of_measurement=UNIT_EMPTY, # No standard unit for this light sensor
),
cv.Optional(CONF_MOVING_DISTANCE): sensor.sensor_schema(
device_class=DEVICE_CLASS_DISTANCE,
filters=[{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)}],
filters=[
{
"timeout": {
"timeout": cv.TimePeriod(milliseconds=1000),
"value": "last",
}
},
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)},
],
icon=ICON_SIGNAL,
unit_of_measurement=UNIT_CENTIMETER,
),
cv.Optional(CONF_MOVING_ENERGY): sensor.sensor_schema(
filters=[{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)}],
filters=[
{
"timeout": {
"timeout": cv.TimePeriod(milliseconds=1000),
"value": "last",
}
},
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)},
],
icon=ICON_MOTION_SENSOR,
unit_of_measurement=UNIT_PERCENT,
),
cv.Optional(CONF_STILL_DISTANCE): sensor.sensor_schema(
device_class=DEVICE_CLASS_DISTANCE,
filters=[{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)}],
filters=[
{
"timeout": {
"timeout": cv.TimePeriod(milliseconds=1000),
"value": "last",
}
},
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)},
],
icon=ICON_SIGNAL,
unit_of_measurement=UNIT_CENTIMETER,
),
cv.Optional(CONF_STILL_ENERGY): sensor.sensor_schema(
filters=[{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)}],
filters=[
{
"timeout": {
"timeout": cv.TimePeriod(milliseconds=1000),
"value": "last",
}
},
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)},
],
icon=ICON_FLASH,
unit_of_measurement=UNIT_PERCENT,
),
@@ -74,7 +122,13 @@ CONFIG_SCHEMA = CONFIG_SCHEMA.extend(
cv.Optional(CONF_MOVE_ENERGY): sensor.sensor_schema(
entity_category=ENTITY_CATEGORY_DIAGNOSTIC,
filters=[
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)}
{
"timeout": {
"timeout": cv.TimePeriod(milliseconds=1000),
"value": "last",
}
},
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)},
],
icon=ICON_MOTION_SENSOR,
unit_of_measurement=UNIT_PERCENT,
@@ -82,7 +136,13 @@ CONFIG_SCHEMA = CONFIG_SCHEMA.extend(
cv.Optional(CONF_STILL_ENERGY): sensor.sensor_schema(
entity_category=ENTITY_CATEGORY_DIAGNOSTIC,
filters=[
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)}
{
"timeout": {
"timeout": cv.TimePeriod(milliseconds=1000),
"value": "last",
}
},
{"throttle_with_priority": cv.TimePeriod(milliseconds=1000)},
],
icon=ICON_FLASH,
unit_of_measurement=UNIT_PERCENT,

View File

@@ -205,8 +205,10 @@ void LD2420Component::dump_config() {
LOG_BUTTON(" ", "Factory Reset:", this->factory_reset_button_);
LOG_BUTTON(" ", "Restart Module:", this->restart_module_button_);
#endif
#ifdef USE_SELECT
ESP_LOGCONFIG(TAG, "Select:");
LOG_SELECT(" ", "Operating Mode", this->operating_selector_);
#endif
if (ld2420::get_firmware_int(this->firmware_ver_) < CALIBRATE_VERSION_MIN) {
ESP_LOGW(TAG, "Firmware version %s and older supports Simple Mode only", this->firmware_ver_);
}
@@ -238,12 +240,20 @@ void LD2420Component::setup() {
memcpy(&this->new_config, &this->current_config, sizeof(this->current_config));
if (ld2420::get_firmware_int(this->firmware_ver_) < CALIBRATE_VERSION_MIN) {
this->set_operating_mode(OP_SIMPLE_MODE_STRING);
this->operating_selector_->publish_state(OP_SIMPLE_MODE_STRING);
#ifdef USE_SELECT
if (this->operating_selector_ != nullptr) {
this->operating_selector_->publish_state(OP_SIMPLE_MODE_STRING);
}
#endif
this->set_mode_(CMD_SYSTEM_MODE_SIMPLE);
ESP_LOGW(TAG, "Firmware version %s and older supports Simple Mode only", this->firmware_ver_);
} else {
this->set_mode_(CMD_SYSTEM_MODE_ENERGY);
this->operating_selector_->publish_state(OP_NORMAL_MODE_STRING);
#ifdef USE_SELECT
if (this->operating_selector_ != nullptr) {
this->operating_selector_->publish_state(OP_NORMAL_MODE_STRING);
}
#endif
}
#ifdef USE_NUMBER
this->init_gate_config_numbers();
@@ -383,8 +393,12 @@ void LD2420Component::set_operating_mode(const char *state) {
// If unsupported firmware ignore mode select
if (ld2420::get_firmware_int(firmware_ver_) >= CALIBRATE_VERSION_MIN) {
this->current_operating_mode = find_uint8(OP_MODE_BY_STR, state);
// Entering Auto Calibrate we need to clear the privoiuos data collection
this->operating_selector_->publish_state(state);
// Entering Auto Calibrate we need to clear the previous data collection
#ifdef USE_SELECT
if (this->operating_selector_ != nullptr) {
this->operating_selector_->publish_state(state);
}
#endif
if (current_operating_mode == OP_CALIBRATE_MODE) {
this->set_calibration_(true);
for (uint8_t gate = 0; gate < TOTAL_GATES; gate++) {
@@ -404,7 +418,11 @@ void LD2420Component::set_operating_mode(const char *state) {
}
} else {
this->current_operating_mode = OP_SIMPLE_MODE;
this->operating_selector_->publish_state(OP_SIMPLE_MODE_STRING);
#ifdef USE_SELECT
if (this->operating_selector_ != nullptr) {
this->operating_selector_->publish_state(OP_SIMPLE_MODE_STRING);
}
#endif
}
}

View File

@@ -52,8 +52,10 @@ static void log_invalid_parameter(const char *name, const LogString *message) {
}
static const LogString *color_mode_to_human(ColorMode color_mode) {
if (color_mode == ColorMode::UNKNOWN)
return LOG_STR("Unknown");
if (color_mode == ColorMode::ON_OFF)
return LOG_STR("On/Off");
if (color_mode == ColorMode::BRIGHTNESS)
return LOG_STR("Brightness");
if (color_mode == ColorMode::WHITE)
return LOG_STR("White");
if (color_mode == ColorMode::COLOR_TEMPERATURE)
@@ -68,7 +70,7 @@ static const LogString *color_mode_to_human(ColorMode color_mode) {
return LOG_STR("RGB + cold/warm white");
if (color_mode == ColorMode::RGB_COLOR_TEMPERATURE)
return LOG_STR("RGB + color temperature");
return LOG_STR("");
return LOG_STR("Unknown");
}
// Helper to log percentage values

View File

@@ -174,7 +174,7 @@ void LTRAlsPs501Component::loop() {
break;
case State::WAITING_FOR_DATA:
if (this->is_als_data_ready_(this->als_readings_) == DataAvail::DATA_OK) {
if (this->is_als_data_ready_(this->als_readings_) == LtrDataAvail::LTR_DATA_OK) {
tries = 0;
ESP_LOGV(TAG, "Reading sensor data assuming gain = %.0fx, time = %d ms",
get_gain_coeff(this->als_readings_.gain), get_itime_ms(this->als_readings_.integration_time));
@@ -379,18 +379,18 @@ void LTRAlsPs501Component::configure_integration_time_(IntegrationTime501 time)
}
}
DataAvail LTRAlsPs501Component::is_als_data_ready_(AlsReadings &data) {
LtrDataAvail LTRAlsPs501Component::is_als_data_ready_(AlsReadings &data) {
AlsPsStatusRegister als_status{0};
als_status.raw = this->reg((uint8_t) CommandRegisters::ALS_PS_STATUS).get();
if (!als_status.als_new_data)
return DataAvail::NO_DATA;
return LtrDataAvail::LTR_NO_DATA;
ESP_LOGV(TAG, "Data ready, reported gain is %.0fx", get_gain_coeff(als_status.gain));
if (data.gain != als_status.gain) {
ESP_LOGW(TAG, "Actual gain differs from requested (%.0f)", get_gain_coeff(data.gain));
return DataAvail::BAD_DATA;
return LtrDataAvail::LTR_BAD_DATA;
}
data.gain = als_status.gain;
return DataAvail::DATA_OK;
return LtrDataAvail::LTR_DATA_OK;
}
void LTRAlsPs501Component::read_sensor_data_(AlsReadings &data) {

View File

@@ -11,7 +11,7 @@
namespace esphome {
namespace ltr501 {
enum DataAvail : uint8_t { NO_DATA, BAD_DATA, DATA_OK };
enum LtrDataAvail : uint8_t { LTR_NO_DATA, LTR_BAD_DATA, LTR_DATA_OK };
enum LtrType : uint8_t {
LTR_TYPE_UNKNOWN = 0,
@@ -106,7 +106,7 @@ class LTRAlsPs501Component : public PollingComponent, public i2c::I2CDevice {
void configure_als_();
void configure_integration_time_(IntegrationTime501 time);
void configure_gain_(AlsGain501 gain);
DataAvail is_als_data_ready_(AlsReadings &data);
LtrDataAvail is_als_data_ready_(AlsReadings &data);
void read_sensor_data_(AlsReadings &data);
bool are_adjustments_required_(AlsReadings &data);
void apply_lux_calculation_(AlsReadings &data);

View File

@@ -165,7 +165,7 @@ void LTRAlsPsComponent::loop() {
break;
case State::WAITING_FOR_DATA:
if (this->is_als_data_ready_(this->als_readings_) == DataAvail::DATA_OK) {
if (this->is_als_data_ready_(this->als_readings_) == LtrDataAvail::LTR_DATA_OK) {
tries = 0;
ESP_LOGV(TAG, "Reading sensor data having gain = %.0fx, time = %d ms", get_gain_coeff(this->als_readings_.gain),
get_itime_ms(this->als_readings_.integration_time));
@@ -376,23 +376,23 @@ void LTRAlsPsComponent::configure_integration_time_(IntegrationTime time) {
}
}
DataAvail LTRAlsPsComponent::is_als_data_ready_(AlsReadings &data) {
LtrDataAvail LTRAlsPsComponent::is_als_data_ready_(AlsReadings &data) {
AlsPsStatusRegister als_status{0};
als_status.raw = this->reg((uint8_t) CommandRegisters::ALS_PS_STATUS).get();
if (!als_status.als_new_data)
return DataAvail::NO_DATA;
return LtrDataAvail::LTR_NO_DATA;
if (als_status.data_invalid) {
ESP_LOGW(TAG, "Data available but not valid");
return DataAvail::BAD_DATA;
return LtrDataAvail::LTR_BAD_DATA;
}
ESP_LOGV(TAG, "Data ready, reported gain is %.0f", get_gain_coeff(als_status.gain));
if (data.gain != als_status.gain) {
ESP_LOGW(TAG, "Actual gain differs from requested (%.0f)", get_gain_coeff(data.gain));
return DataAvail::BAD_DATA;
return LtrDataAvail::LTR_BAD_DATA;
}
return DataAvail::DATA_OK;
return LtrDataAvail::LTR_DATA_OK;
}
void LTRAlsPsComponent::read_sensor_data_(AlsReadings &data) {

View File

@@ -11,7 +11,7 @@
namespace esphome {
namespace ltr_als_ps {
enum DataAvail : uint8_t { NO_DATA, BAD_DATA, DATA_OK };
enum LtrDataAvail : uint8_t { LTR_NO_DATA, LTR_BAD_DATA, LTR_DATA_OK };
enum LtrType : uint8_t {
LTR_TYPE_UNKNOWN = 0,
@@ -106,7 +106,7 @@ class LTRAlsPsComponent : public PollingComponent, public i2c::I2CDevice {
void configure_als_();
void configure_integration_time_(IntegrationTime time);
void configure_gain_(AlsGain gain);
DataAvail is_als_data_ready_(AlsReadings &data);
LtrDataAvail is_als_data_ready_(AlsReadings &data);
void read_sensor_data_(AlsReadings &data);
bool are_adjustments_required_(AlsReadings &data);
void apply_lux_calculation_(AlsReadings &data);

View File

@@ -36,6 +36,8 @@ from .defines import (
)
from .lv_validation import padding, size
CONF_MULTIPLE_WIDGETS_PER_CELL = "multiple_widgets_per_cell"
cell_alignments = LV_CELL_ALIGNMENTS.one_of
grid_alignments = LV_GRID_ALIGNMENTS.one_of
flex_alignments = LV_FLEX_ALIGNMENTS.one_of
@@ -220,6 +222,7 @@ class GridLayout(Layout):
cv.Optional(CONF_GRID_ROW_ALIGN): grid_alignments,
cv.Optional(CONF_PAD_ROW): padding,
cv.Optional(CONF_PAD_COLUMN): padding,
cv.Optional(CONF_MULTIPLE_WIDGETS_PER_CELL, default=False): cv.boolean,
},
{
cv.Optional(CONF_GRID_CELL_ROW_POS): cv.positive_int,
@@ -263,6 +266,7 @@ class GridLayout(Layout):
# should be guaranteed to be a dict at this point
assert isinstance(layout, dict)
assert layout.get(CONF_TYPE).lower() == TYPE_GRID
allow_multiple = layout.get(CONF_MULTIPLE_WIDGETS_PER_CELL, False)
rows = len(layout[CONF_GRID_ROWS])
columns = len(layout[CONF_GRID_COLUMNS])
used_cells = [[None] * columns for _ in range(rows)]
@@ -299,7 +303,10 @@ class GridLayout(Layout):
f"exceeds grid size {rows}x{columns}",
[CONF_WIDGETS, index],
)
if used_cells[row + i][column + j] is not None:
if (
not allow_multiple
and used_cells[row + i][column + j] is not None
):
raise cv.Invalid(
f"Cell span {row + i}/{column + j} already occupied by widget at index {used_cells[row + i][column + j]}",
[CONF_WIDGETS, index],

View File

@@ -29,15 +29,18 @@ class LVGLNumber : public number::Number, public Component {
this->publish_state(value);
}
void on_value() { this->publish_state(this->value_lambda_()); }
void on_value() { this->publish_(this->value_lambda_()); }
protected:
void control(float value) override {
this->control_lambda_(value);
void publish_(float value) {
this->publish_state(value);
if (this->restore_)
this->pref_.save(&value);
}
void control(float value) override {
this->control_lambda_(value);
this->publish_(value);
}
std::function<void(float)> control_lambda_;
std::function<float()> value_lambda_;
lv_event_code_t event_;

View File

@@ -1,6 +1,7 @@
from esphome import config_validation as cv
from esphome.automation import Trigger, validate_automation
from esphome.components.time import RealTimeClock
from esphome.config_validation import prepend_path
from esphome.const import (
CONF_ARGS,
CONF_FORMAT,
@@ -422,7 +423,10 @@ def any_widget_schema(extras=None):
def validator(value):
if isinstance(value, dict):
# Convert to list
is_dict = True
value = [{k: v} for k, v in value.items()]
else:
is_dict = False
if not isinstance(value, list):
raise cv.Invalid("Expected a list of widgets")
result = []
@@ -443,7 +447,9 @@ def any_widget_schema(extras=None):
)
# Apply custom validation
value = widget_type.validate(value or {})
result.append({key: container_validator(value)})
path = [key] if is_dict else [index, key]
with prepend_path(path):
result.append({key: container_validator(value)})
return result
return validator

View File

@@ -1,6 +1,7 @@
from esphome import automation
import esphome.config_validation as cv
from esphome.const import CONF_ID, CONF_RANGE_FROM, CONF_RANGE_TO, CONF_STEP, CONF_VALUE
from esphome.cpp_generator import MockObj
from ..automation import action_to_code
from ..defines import (
@@ -114,7 +115,9 @@ class SpinboxType(WidgetType):
w.obj, digits, digits - config[CONF_DECIMAL_PLACES]
)
if (value := config.get(CONF_VALUE)) is not None:
lv.spinbox_set_value(w.obj, await lv_float.process(value))
lv.spinbox_set_value(
w.obj, MockObj(await lv_float.process(value)) * w.get_scale()
)
def get_scale(self, config):
return 10 ** config[CONF_DECIMAL_PLACES]

View File

@@ -11,6 +11,12 @@ static bool notify_refresh_ready(esp_lcd_panel_handle_t panel, esp_lcd_dpi_panel
xSemaphoreGiveFromISR(sem, &need_yield);
return (need_yield == pdTRUE);
}
void MIPI_DSI::smark_failed(const char *message, esp_err_t err) {
ESP_LOGE(TAG, "%s: %s", message, esp_err_to_name(err));
this->mark_failed(message);
}
void MIPI_DSI::setup() {
ESP_LOGCONFIG(TAG, "Running Setup");

View File

@@ -62,10 +62,7 @@ class MIPI_DSI : public display::Display {
void set_lanes(uint8_t lanes) { this->lanes_ = lanes; }
void set_madctl(uint8_t madctl) { this->madctl_ = madctl; }
void smark_failed(const char *message, esp_err_t err) {
auto str = str_sprintf("Setup failed: %s: %s", message, esp_err_to_name(err));
this->mark_failed(str.c_str());
}
void smark_failed(const char *message, esp_err_t err);
void update() override;

View File

@@ -164,8 +164,8 @@ void MipiRgb::common_setup_() {
if (err == ESP_OK)
err = esp_lcd_panel_init(this->handle_);
if (err != ESP_OK) {
auto msg = str_sprintf("lcd setup failed: %s", esp_err_to_name(err));
this->mark_failed(msg.c_str());
ESP_LOGE(TAG, "lcd setup failed: %s", esp_err_to_name(err));
this->mark_failed("lcd setup failed");
}
ESP_LOGCONFIG(TAG, "MipiRgb setup complete");
}
@@ -350,6 +350,7 @@ void MipiRgb::dump_config() {
"\n Width: %u"
"\n Height: %u"
"\n Rotation: %d degrees"
"\n PCLK Inverted: %s"
"\n HSync Pulse Width: %u"
"\n HSync Back Porch: %u"
"\n HSync Front Porch: %u"
@@ -357,18 +358,18 @@ void MipiRgb::dump_config() {
"\n VSync Back Porch: %u"
"\n VSync Front Porch: %u"
"\n Invert Colors: %s"
"\n Pixel Clock: %dMHz"
"\n Pixel Clock: %uMHz"
"\n Reset Pin: %s"
"\n DE Pin: %s"
"\n PCLK Pin: %s"
"\n HSYNC Pin: %s"
"\n VSYNC Pin: %s",
this->model_, this->width_, this->height_, this->rotation_, this->hsync_pulse_width_,
this->hsync_back_porch_, this->hsync_front_porch_, this->vsync_pulse_width_, this->vsync_back_porch_,
this->vsync_front_porch_, YESNO(this->invert_colors_), this->pclk_frequency_ / 1000000,
get_pin_name(this->reset_pin_).c_str(), get_pin_name(this->de_pin_).c_str(),
get_pin_name(this->pclk_pin_).c_str(), get_pin_name(this->hsync_pin_).c_str(),
get_pin_name(this->vsync_pin_).c_str());
this->model_, this->width_, this->height_, this->rotation_, YESNO(this->pclk_inverted_),
this->hsync_pulse_width_, this->hsync_back_porch_, this->hsync_front_porch_, this->vsync_pulse_width_,
this->vsync_back_porch_, this->vsync_front_porch_, YESNO(this->invert_colors_),
(unsigned) (this->pclk_frequency_ / 1000000), get_pin_name(this->reset_pin_).c_str(),
get_pin_name(this->de_pin_).c_str(), get_pin_name(this->pclk_pin_).c_str(),
get_pin_name(this->hsync_pin_).c_str(), get_pin_name(this->vsync_pin_).c_str());
if (this->madctl_ & MADCTL_BGR) {
this->dump_pins_(8, 13, "Blue", 0);

View File

@@ -11,6 +11,7 @@ st7701s.extend(
vsync_pin=17,
pclk_pin=21,
pclk_frequency="12MHz",
pclk_inverted=False,
pixel_mode="18bit",
mirror_x=True,
mirror_y=True,

View File

@@ -116,7 +116,7 @@ bool MopekaProCheck::parse_device(const esp32_ble_tracker::ESPBTDevice &device)
// Get temperature of sensor
if (this->temperature_ != nullptr) {
uint8_t temp_in_c = this->parse_temperature_(manu_data.data);
int8_t temp_in_c = this->parse_temperature_(manu_data.data);
this->temperature_->publish_state(temp_in_c);
}
@@ -145,7 +145,7 @@ uint32_t MopekaProCheck::parse_distance_(const std::vector<uint8_t> &message) {
(MOPEKA_LPG_COEF[0] + MOPEKA_LPG_COEF[1] * raw_t + MOPEKA_LPG_COEF[2] * raw_t * raw_t));
}
uint8_t MopekaProCheck::parse_temperature_(const std::vector<uint8_t> &message) { return (message[2] & 0x7F) - 40; }
int8_t MopekaProCheck::parse_temperature_(const std::vector<uint8_t> &message) { return (message[2] & 0x7F) - 40; }
SensorReadQuality MopekaProCheck::parse_read_quality_(const std::vector<uint8_t> &message) {
// Since an 8-bit value is being shifted and truncated to 2 bits, all possible values are defined as an enumeration

View File

@@ -61,7 +61,7 @@ class MopekaProCheck : public Component, public esp32_ble_tracker::ESPBTDeviceLi
uint8_t parse_battery_level_(const std::vector<uint8_t> &message);
uint32_t parse_distance_(const std::vector<uint8_t> &message);
uint8_t parse_temperature_(const std::vector<uint8_t> &message);
int8_t parse_temperature_(const std::vector<uint8_t> &message);
SensorReadQuality parse_read_quality_(const std::vector<uint8_t> &message);
};
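
The change above only flips the signedness of parse_temperature_'s return type; the arithmetic is untouched. A minimal standalone sketch (not the component code, advertisement byte made up) of why the uint8_t version could never report sub-zero temperatures:

#include <cstdint>
#include <cstdio>

// Same decode as parse_temperature_: low 7 bits of the advertised byte, offset by -40 degC.
int8_t decode_signed(uint8_t raw) { return (raw & 0x7F) - 40; }
uint8_t decode_unsigned(uint8_t raw) { return (raw & 0x7F) - 40; }

int main() {
  uint8_t raw = 0x14;  // hypothetical reading: 20 - 40 = -20 degC
  printf("int8_t:  %d\n", (int) decode_signed(raw));         // -20
  printf("uint8_t: %u\n", (unsigned) decode_unsigned(raw));  // 236 (wrapped)
  return 0;
}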

View File

@@ -81,7 +81,12 @@ struct IPAddress {
ip_addr_.type = IPADDR_TYPE_V6;
}
#endif /* LWIP_IPV6 */
IPAddress(esp_ip4_addr_t *other_ip) { memcpy((void *) &ip_addr_, (void *) other_ip, sizeof(esp_ip4_addr_t)); }
IPAddress(esp_ip4_addr_t *other_ip) {
memcpy((void *) &ip_addr_, (void *) other_ip, sizeof(esp_ip4_addr_t));
#if LWIP_IPV6
ip_addr_.type = IPADDR_TYPE_V4;
#endif
}
IPAddress(esp_ip_addr_t *other_ip) {
#if LWIP_IPV6
memcpy((void *) &ip_addr_, (void *) other_ip, sizeof(ip_addr_));

View File

@@ -174,6 +174,9 @@ bool Nextion::upload_tft(uint32_t baud_rate, bool exit_reparse) {
// Check if baud rate is supported
this->original_baud_rate_ = this->parent_->get_baud_rate();
if (baud_rate <= 0) {
baud_rate = this->original_baud_rate_;
}
ESP_LOGD(TAG, "Baud rate: %" PRIu32, baud_rate);
// Define the configuration for the HTTP client

View File

@@ -177,6 +177,9 @@ bool Nextion::upload_tft(uint32_t baud_rate, bool exit_reparse) {
// Check if baud rate is supported
this->original_baud_rate_ = this->parent_->get_baud_rate();
if (baud_rate <= 0) {
baud_rate = this->original_baud_rate_;
}
ESP_LOGD(TAG, "Baud rate: %" PRIu32, baud_rate);
// Define the configuration for the HTTP client

View File

@@ -2,6 +2,7 @@
#ifdef USE_ONLINE_IMAGE_PNG_SUPPORT
#include "esphome/components/display/display_buffer.h"
#include "esphome/core/application.h"
#include "esphome/core/helpers.h"
#include "esphome/core/log.h"
@@ -38,6 +39,14 @@ static void draw_callback(pngle_t *pngle, uint32_t x, uint32_t y, uint32_t w, ui
PngDecoder *decoder = (PngDecoder *) pngle_get_user_data(pngle);
Color color(rgba[0], rgba[1], rgba[2], rgba[3]);
decoder->draw(x, y, w, h, color);
// Feed watchdog periodically to avoid triggering during long decode operations.
// Feed every 1024 pixels to balance efficiency and responsiveness.
uint32_t pixels = w * h;
decoder->increment_pixels_decoded(pixels);
if ((decoder->get_pixels_decoded() % 1024) < pixels) {
App.feed_wdt();
}
}
PngDecoder::PngDecoder(OnlineImage *image) : ImageDecoder(image) {
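
The watchdog guard added above fires only when the running pixel counter has just crossed a multiple of 1024. A small standalone sketch of that modulo test, with made-up per-callback pixel counts:

#include <cstdint>
#include <cstdio>

int main() {
  uint32_t decoded = 0;
  const uint32_t chunks[] = {500, 500, 30, 2000};  // hypothetical draw_callback sizes
  for (uint32_t pixels : chunks) {
    decoded += pixels;
    // True only if a multiple of 1024 was passed within the last 'pixels' increments
    bool feed = (decoded % 1024) < pixels;
    printf("decoded=%u feed=%s\n", (unsigned) decoded, feed ? "yes" : "no");
  }
  // Prints: 500 no, 1000 no, 1030 yes (crossed 1024), 3030 yes (crossed 2048)
  return 0;
}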

View File

@@ -25,9 +25,13 @@ class PngDecoder : public ImageDecoder {
int prepare(size_t download_size) override;
int HOT decode(uint8_t *buffer, size_t size) override;
void increment_pixels_decoded(uint32_t count) { this->pixels_decoded_ += count; }
uint32_t get_pixels_decoded() const { return this->pixels_decoded_; }
protected:
RAMAllocator<pngle_t> allocator_;
pngle_t *pngle_;
uint32_t pixels_decoded_{0};
};
} // namespace online_image

View File

@@ -195,8 +195,8 @@ static void add(std::vector<uint8_t> &vec, const char *str) {
void PacketTransport::setup() {
this->name_ = App.get_name().c_str();
if (strlen(this->name_) > 255) {
this->mark_failed();
this->status_set_error("Device name exceeds 255 chars");
this->mark_failed();
return;
}
this->resend_ping_key_ = this->ping_pong_enable_;

View File

@@ -6,10 +6,13 @@
# in schema.py file in this directory.
from esphome import pins
import esphome.codegen as cg
from esphome.components import libretiny
from esphome.components.libretiny.const import (
COMPONENT_RTL87XX,
FAMILY_RTL8710B,
KEY_COMPONENT_DATA,
KEY_FAMILY,
KEY_LIBRETINY,
LibreTinyComponent,
)
@@ -45,6 +48,11 @@ CONFIG_SCHEMA.prepend_extra(_set_core_data)
async def to_code(config):
# Use FreeRTOS 8.2.3+ for xTaskNotifyGive/ulTaskNotifyTake required by AsyncTCP 3.4.3+
# https://github.com/esphome/esphome/issues/10220
# Only for RTL8710B (ambz) - RTL8720C (ambz2) requires FreeRTOS 10.x
if CORE.data[KEY_LIBRETINY][KEY_FAMILY] == FAMILY_RTL8710B:
cg.add_platformio_option("custom_versions.freertos", "8.2.3")
return await libretiny.component_to_code(config)

View File

@@ -1,8 +1,8 @@
#pragma once
#include <list>
#include <memory>
#include <tuple>
#include <forward_list>
#include "esphome/core/automation.h"
#include "esphome/core/component.h"
#include "esphome/core/helpers.h"
@@ -278,7 +278,12 @@ template<class C, typename... Ts> class ScriptWaitAction : public Action<Ts...>,
void setup() override {
// Start with loop disabled - only enable when there's work to do
this->disable_loop();
// IMPORTANT: Only disable if num_running_ is 0, otherwise play_complex() was already
// called before our setup() (e.g., from on_boot trigger at same priority level)
// and we must not undo its enable_loop() call
if (this->num_running_ == 0) {
this->disable_loop();
}
}
void play_complex(const Ts &...x) override {
@@ -290,10 +295,10 @@ template<class C, typename... Ts> class ScriptWaitAction : public Action<Ts...>,
}
// Store parameters for later execution
this->param_queue_.emplace_front(x...);
// Enable loop now that we have work to do
this->param_queue_.emplace_back(x...);
// Enable loop now that we have work to do - don't call loop() synchronously!
// Let the event loop call it to avoid reentrancy issues
this->enable_loop();
this->loop();
}
void loop() override {
@@ -303,13 +308,17 @@ template<class C, typename... Ts> class ScriptWaitAction : public Action<Ts...>,
if (this->script_->is_running())
return;
while (!this->param_queue_.empty()) {
// Only process ONE queued item per loop iteration
// Processing all items in a while loop causes infinite loops because
// play_next_() can trigger more items to be queued
if (!this->param_queue_.empty()) {
auto &params = this->param_queue_.front();
this->play_next_tuple_(params, typename gens<sizeof...(Ts)>::type());
this->param_queue_.pop_front();
} else {
// Queue is now empty - disable loop until next play_complex
this->disable_loop();
}
// Queue is now empty - disable loop until next play_complex
this->disable_loop();
}
void play(const Ts &...x) override { /* ignore - see play_complex */
@@ -326,7 +335,7 @@ template<class C, typename... Ts> class ScriptWaitAction : public Action<Ts...>,
}
C *script_;
std::forward_list<std::tuple<Ts...>> param_queue_;
std::list<std::tuple<Ts...>> param_queue_;
};
} // namespace script
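
Besides the reentrancy notes in the comments, the container swap above also changes replay order: queuing with emplace_front on a forward_list and then taking front() replays pending calls last-in-first-out, while emplace_back on a std::list keeps them in call order. A minimal standalone comparison (the ordering reading is my inference; the diff's own comments only discuss reentrancy):

#include <cstdio>
#include <forward_list>
#include <list>

int main() {
  std::forward_list<int> lifo;
  std::list<int> fifo;
  for (int call = 1; call <= 3; call++) {
    lifo.emplace_front(call);  // old pattern
    fifo.emplace_back(call);   // new pattern
  }
  printf("forward_list, front first:");
  for (int v : lifo) printf(" %d", v);  // 3 2 1
  printf("\nlist, front first:");
  for (int v : fifo) printf(" %d", v);  // 1 2 3
  printf("\n");
  return 0;
}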

View File

@@ -73,17 +73,17 @@ void SFA30Component::update() {
}
if (this->formaldehyde_sensor_ != nullptr) {
const float formaldehyde = raw_data[0] / 5.0f;
const float formaldehyde = static_cast<int16_t>(raw_data[0]) / 5.0f;
this->formaldehyde_sensor_->publish_state(formaldehyde);
}
if (this->humidity_sensor_ != nullptr) {
const float humidity = raw_data[1] / 100.0f;
const float humidity = static_cast<int16_t>(raw_data[1]) / 100.0f;
this->humidity_sensor_->publish_state(humidity);
}
if (this->temperature_sensor_ != nullptr) {
const float temperature = raw_data[2] / 200.0f;
const float temperature = static_cast<int16_t>(raw_data[2]) / 200.0f;
this->temperature_sensor_->publish_state(temperature);
}
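
The casts above matter because the sensor's readings are evidently signed 16-bit values (hence the int16_t casts); divided as unsigned words they only look right for positive readings. A standalone sketch, assuming raw_data holds unsigned 16-bit words straight from the I2C read and using a made-up raw value:

#include <cstdint>
#include <cstdio>

int main() {
  uint16_t raw = 0xFFCE;  // hypothetical temperature word, -50 in two's complement
  float without_cast = raw / 200.0f;                     // 327.43 degC
  float with_cast = static_cast<int16_t>(raw) / 200.0f;  // -0.25 degC
  printf("without cast: %.2f  with cast: %.2f\n", without_cast, with_cast);
  return 0;
}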

View File

@@ -1,9 +1,14 @@
import logging
import esphome.codegen as cg
from esphome.components import time as time_
from esphome.config_helpers import merge_config
import esphome.config_validation as cv
from esphome.const import (
CONF_ID,
CONF_PLATFORM,
CONF_SERVERS,
CONF_TIME,
PLATFORM_BK72XX,
PLATFORM_ESP32,
PLATFORM_ESP8266,
@@ -12,13 +17,74 @@ from esphome.const import (
PLATFORM_RTL87XX,
)
from esphome.core import CORE
import esphome.final_validate as fv
from esphome.types import ConfigType
_LOGGER = logging.getLogger(__name__)
DEPENDENCIES = ["network"]
CONF_SNTP = "sntp"
sntp_ns = cg.esphome_ns.namespace("sntp")
SNTPComponent = sntp_ns.class_("SNTPComponent", time_.RealTimeClock)
DEFAULT_SERVERS = ["0.pool.ntp.org", "1.pool.ntp.org", "2.pool.ntp.org"]
def _sntp_final_validate(config: ConfigType) -> None:
"""Merge multiple SNTP instances into one, similar to OTA merging behavior."""
full_conf = fv.full_config.get()
time_confs = full_conf.get(CONF_TIME, [])
sntp_configs: list[ConfigType] = []
other_time_configs: list[ConfigType] = []
for time_conf in time_confs:
if time_conf.get(CONF_PLATFORM) == CONF_SNTP:
sntp_configs.append(time_conf)
else:
other_time_configs.append(time_conf)
if len(sntp_configs) <= 1:
return
# Merge all SNTP configs into the first one
merged = sntp_configs[0]
for sntp_conf in sntp_configs[1:]:
# Validate that IDs are consistent if manually specified
if merged[CONF_ID].is_manual and sntp_conf[CONF_ID].is_manual:
raise cv.Invalid(
f"Found multiple SNTP configurations but {CONF_ID} is inconsistent"
)
merged = merge_config(merged, sntp_conf)
# Deduplicate servers while preserving order
servers = merged[CONF_SERVERS]
unique_servers = list(dict.fromkeys(servers))
# Warn if we're dropping servers due to 3-server limit
if len(unique_servers) > 3:
dropped = unique_servers[3:]
unique_servers = unique_servers[:3]
_LOGGER.warning(
"SNTP supports maximum 3 servers. Dropped excess server(s): %s",
dropped,
)
merged[CONF_SERVERS] = unique_servers
_LOGGER.warning(
"Found and merged %d SNTP time configurations into one instance",
len(sntp_configs),
)
# Replace time configs with merged SNTP + other time platforms
other_time_configs.append(merged)
full_conf[CONF_TIME] = other_time_configs
fv.full_config.set(full_conf)
CONFIG_SCHEMA = cv.All(
time_.TIME_SCHEMA.extend(
{
@@ -40,6 +106,8 @@ CONFIG_SCHEMA = cv.All(
),
)
FINAL_VALIDATE_SCHEMA = _sntp_final_validate
async def to_code(config):
servers = config[CONF_SERVERS]

View File

@@ -66,10 +66,14 @@ SubstituteFilter::SubstituteFilter(const std::initializer_list<Substitution> &su
: substitutions_(substitutions) {}
optional<std::string> SubstituteFilter::new_value(std::string value) {
std::size_t pos;
for (const auto &sub : this->substitutions_) {
while ((pos = value.find(sub.from)) != std::string::npos)
std::size_t pos = 0;
while ((pos = value.find(sub.from, pos)) != std::string::npos) {
value.replace(pos, sub.from.size(), sub.to);
// Advance past the replacement to avoid infinite loop when
// the replacement contains the search pattern (e.g., f -> foo)
pos += sub.to.size();
}
}
return value;
}
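
The comment above is the whole story: the old loop restarted the search at position 0, so any substitution whose replacement still contains the search text re-matched its own output forever. A standalone sketch of the fixed loop (function name is mine, not the filter's API):

#include <cstdio>
#include <string>

// Same structure as the fixed new_value(): resume searching after the inserted text.
std::string substitute(std::string value, const std::string &from, const std::string &to) {
  std::size_t pos = 0;
  while ((pos = value.find(from, pos)) != std::string::npos) {
    value.replace(pos, from.size(), to);
    pos += to.size();  // skip over the replacement so "f" -> "foo" cannot loop
  }
  return value;
}

int main() {
  printf("%s\n", substitute("flip", "f", "foo").c_str());  // prints "foolip" and terminates
  return 0;
}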

View File

@@ -1,3 +1,4 @@
from logging import getLogger
import math
import re
@@ -35,6 +36,8 @@ from esphome.core import CORE, ID
import esphome.final_validate as fv
from esphome.yaml_util import make_data_base
_LOGGER = getLogger(__name__)
CODEOWNERS = ["@esphome/core"]
uart_ns = cg.esphome_ns.namespace("uart")
UARTComponent = uart_ns.class_("UARTComponent")
@@ -130,6 +133,21 @@ def validate_host_config(config):
return config
def validate_rx_buffer_size(config):
if CORE.is_esp32:
# ESP32 UART hardware FIFO is 128 bytes (LP UART is 16 bytes, but we use 128 as safe minimum)
# rx_buffer_size must be greater than the hardware FIFO length
min_buffer_size = 128
if config[CONF_RX_BUFFER_SIZE] <= min_buffer_size:
_LOGGER.warning(
"UART rx_buffer_size (%d bytes) is too small and must be greater than the hardware "
"FIFO size (%d bytes). The buffer size will be automatically adjusted at runtime.",
config[CONF_RX_BUFFER_SIZE],
min_buffer_size,
)
return config
def _uart_declare_type(value):
if CORE.is_esp8266:
return cv.declare_id(ESP8266UartComponent)(value)
@@ -247,6 +265,7 @@ CONFIG_SCHEMA = cv.All(
).extend(cv.COMPONENT_SCHEMA),
cv.has_at_least_one_key(CONF_TX_PIN, CONF_RX_PIN, CONF_PORT),
validate_host_config,
validate_rx_buffer_size,
)

View File

@@ -56,11 +56,19 @@ uint32_t ESP8266UartComponent::get_config() {
}
void ESP8266UartComponent::setup() {
if (this->rx_pin_) {
this->rx_pin_->setup();
}
if (this->tx_pin_ && this->rx_pin_ != this->tx_pin_) {
this->tx_pin_->setup();
auto setup_pin_if_needed = [](InternalGPIOPin *pin) {
if (!pin) {
return;
}
const auto mask = gpio::Flags::FLAG_OPEN_DRAIN | gpio::Flags::FLAG_PULLUP | gpio::Flags::FLAG_PULLDOWN;
if ((pin->get_flags() & mask) != gpio::Flags::FLAG_NONE) {
pin->setup();
}
};
setup_pin_if_needed(this->rx_pin_);
if (this->rx_pin_ != this->tx_pin_) {
setup_pin_if_needed(this->tx_pin_);
}
// Use Arduino HardwareSerial UARTs if all used pins match the ones

View File

@@ -91,6 +91,16 @@ void IDFUARTComponent::setup() {
this->uart_num_ = static_cast<uart_port_t>(next_uart_num++);
this->lock_ = xSemaphoreCreateMutex();
#if (SOC_UART_LP_NUM >= 1)
size_t fifo_len = ((this->uart_num_ < SOC_UART_HP_NUM) ? SOC_UART_FIFO_LEN : SOC_LP_UART_FIFO_LEN);
#else
size_t fifo_len = SOC_UART_FIFO_LEN;
#endif
if (this->rx_buffer_size_ <= fifo_len) {
ESP_LOGW(TAG, "rx_buffer_size is too small, must be greater than %zu", fifo_len);
this->rx_buffer_size_ = fifo_len * 2;
}
xSemaphoreTake(this->lock_, portMAX_DELAY);
this->load_settings(false);
@@ -123,11 +133,19 @@ void IDFUARTComponent::load_settings(bool dump_config) {
return;
}
if (this->rx_pin_) {
this->rx_pin_->setup();
}
if (this->tx_pin_ && this->rx_pin_ != this->tx_pin_) {
this->tx_pin_->setup();
auto setup_pin_if_needed = [](InternalGPIOPin *pin) {
if (!pin) {
return;
}
const auto mask = gpio::Flags::FLAG_OPEN_DRAIN | gpio::Flags::FLAG_PULLUP | gpio::Flags::FLAG_PULLDOWN;
if ((pin->get_flags() & mask) != gpio::Flags::FLAG_NONE) {
pin->setup();
}
};
setup_pin_if_needed(this->rx_pin_);
if (this->rx_pin_ != this->tx_pin_) {
setup_pin_if_needed(this->tx_pin_);
}
int8_t tx = this->tx_pin_ != nullptr ? this->tx_pin_->get_pin() : -1;
@@ -237,8 +255,12 @@ void IDFUARTComponent::set_rx_timeout(size_t rx_timeout) {
void IDFUARTComponent::write_array(const uint8_t *data, size_t len) {
xSemaphoreTake(this->lock_, portMAX_DELAY);
uart_write_bytes(this->uart_num_, data, len);
int32_t write_len = uart_write_bytes(this->uart_num_, data, len);
xSemaphoreGive(this->lock_);
if (write_len != (int32_t) len) {
ESP_LOGW(TAG, "uart_write_bytes failed: %d != %zu", write_len, len);
this->mark_failed();
}
#ifdef USE_UART_DEBUGGER
for (size_t i = 0; i < len; i++) {
this->debug_callback_.call(UART_DIRECTION_TX, data[i]);
@@ -267,6 +289,7 @@ bool IDFUARTComponent::peek_byte(uint8_t *data) {
bool IDFUARTComponent::read_array(uint8_t *data, size_t len) {
size_t length_to_read = len;
int32_t read_len = 0;
if (!this->check_read_timeout_(len))
return false;
xSemaphoreTake(this->lock_, portMAX_DELAY);
@@ -277,25 +300,31 @@ bool IDFUARTComponent::read_array(uint8_t *data, size_t len) {
this->has_peek_ = false;
}
if (length_to_read > 0)
uart_read_bytes(this->uart_num_, data, length_to_read, 20 / portTICK_PERIOD_MS);
read_len = uart_read_bytes(this->uart_num_, data, length_to_read, 20 / portTICK_PERIOD_MS);
xSemaphoreGive(this->lock_);
#ifdef USE_UART_DEBUGGER
for (size_t i = 0; i < len; i++) {
this->debug_callback_.call(UART_DIRECTION_RX, data[i]);
}
#endif
return true;
return read_len == (int32_t) length_to_read;
}
int IDFUARTComponent::available() {
size_t available;
size_t available = 0;
esp_err_t err;
xSemaphoreTake(this->lock_, portMAX_DELAY);
uart_get_buffered_data_len(this->uart_num_, &available);
if (this->has_peek_)
available++;
err = uart_get_buffered_data_len(this->uart_num_, &available);
xSemaphoreGive(this->lock_);
if (err != ESP_OK) {
ESP_LOGW(TAG, "uart_get_buffered_data_len failed: %s", esp_err_to_name(err));
this->mark_failed();
}
if (this->has_peek_) {
available++;
}
return available;
}

View File

@@ -53,7 +53,7 @@ void LibreTinyUARTComponent::setup() {
auto shouldFallbackToSoftwareSerial = [&]() -> bool {
auto hasFlags = [](InternalGPIOPin *pin, const gpio::Flags mask) -> bool {
return pin && pin->get_flags() & mask != gpio::Flags::FLAG_NONE;
return pin && (pin->get_flags() & mask) != gpio::Flags::FLAG_NONE;
};
if (hasFlags(this->tx_pin_, gpio::Flags::FLAG_OPEN_DRAIN | gpio::Flags::FLAG_PULLUP | gpio::Flags::FLAG_PULLDOWN) ||
hasFlags(this->rx_pin_, gpio::Flags::FLAG_OPEN_DRAIN | gpio::Flags::FLAG_PULLUP | gpio::Flags::FLAG_PULLDOWN)) {
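
The parenthesization above is an operator-precedence fix: != binds tighter than &, so the old expression compared the mask against FLAG_NONE first and then AND-ed the pin flags with that boolean. A standalone illustration with plain integers (values are made up, not the gpio::Flags enum):

#include <cstdint>
#include <cstdio>

int main() {
  uint8_t flags = 0x04;  // e.g. only a pull-up flag set on the pin
  uint8_t mask = 0x07;   // open-drain | pull-up | pull-down
  bool old_parse = flags & (mask != 0);  // how "flags & mask != 0" is parsed: 0x04 & 1 == 0
  bool fixed = (flags & mask) != 0;      // true: the pin has a flag within the mask
  printf("old=%d fixed=%d\n", (int) old_parse, (int) fixed);
  return 0;
}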

View File

@@ -52,11 +52,19 @@ uint16_t RP2040UartComponent::get_config() {
}
void RP2040UartComponent::setup() {
if (this->rx_pin_) {
this->rx_pin_->setup();
}
if (this->tx_pin_ && this->rx_pin_ != this->tx_pin_) {
this->tx_pin_->setup();
auto setup_pin_if_needed = [](InternalGPIOPin *pin) {
if (!pin) {
return;
}
const auto mask = gpio::Flags::FLAG_OPEN_DRAIN | gpio::Flags::FLAG_PULLUP | gpio::Flags::FLAG_PULLDOWN;
if ((pin->get_flags() & mask) != gpio::Flags::FLAG_NONE) {
pin->setup();
}
};
setup_pin_if_needed(this->rx_pin_);
if (this->rx_pin_ != this->tx_pin_) {
setup_pin_if_needed(this->tx_pin_);
}
uint16_t config = get_config();

View File

@@ -21,8 +21,8 @@ void UDPComponent::setup() {
if (this->should_broadcast_) {
this->broadcast_socket_ = socket::socket(AF_INET, SOCK_DGRAM, IPPROTO_IP);
if (this->broadcast_socket_ == nullptr) {
this->mark_failed();
this->status_set_error("Could not create socket");
this->mark_failed();
return;
}
int enable = 1;
@@ -41,15 +41,15 @@ void UDPComponent::setup() {
if (this->should_listen_) {
this->listen_socket_ = socket::socket(AF_INET, SOCK_DGRAM, IPPROTO_IP);
if (this->listen_socket_ == nullptr) {
this->mark_failed();
this->status_set_error("Could not create socket");
this->mark_failed();
return;
}
auto err = this->listen_socket_->setblocking(false);
if (err < 0) {
ESP_LOGE(TAG, "Unable to set nonblocking: errno %d", errno);
this->mark_failed();
this->status_set_error("Unable to set nonblocking");
this->mark_failed();
return;
}
int enable = 1;
@@ -73,8 +73,8 @@ void UDPComponent::setup() {
err = this->listen_socket_->setsockopt(IPPROTO_IP, IP_ADD_MEMBERSHIP, &imreq, sizeof(imreq));
if (err < 0) {
ESP_LOGE(TAG, "Failed to set IP_ADD_MEMBERSHIP. Error %d", errno);
this->mark_failed();
this->status_set_error("Failed to set IP_ADD_MEMBERSHIP");
this->mark_failed();
return;
}
}
@@ -82,8 +82,8 @@ void UDPComponent::setup() {
err = this->listen_socket_->bind((struct sockaddr *) &server, sizeof(server));
if (err != 0) {
ESP_LOGE(TAG, "Socket unable to bind: errno %d", errno);
this->mark_failed();
this->status_set_error("Unable to bind socket");
this->mark_failed();
return;
}
}

View File

@@ -1,4 +1,5 @@
import esphome.codegen as cg
from esphome.components import socket
from esphome.components.uart import (
CONF_DATA_BITS,
CONF_PARITY,
@@ -17,7 +18,7 @@ from esphome.const import (
)
from esphome.cpp_types import Component
AUTO_LOAD = ["uart", "usb_host", "bytebuffer"]
AUTO_LOAD = ["uart", "usb_host", "bytebuffer", "socket"]
CODEOWNERS = ["@clydebarrow"]
usb_uart_ns = cg.esphome_ns.namespace("usb_uart")
@@ -116,6 +117,10 @@ CONFIG_SCHEMA = cv.ensure_list(
async def to_code(config):
# Enable wake_loop_threadsafe for low-latency USB data processing
# The USB task queues data events that need immediate processing
socket.require_wake_loop_threadsafe()
for device in config:
var = await register_usb_client(device)
for index, channel in enumerate(device[CONF_CHANNELS]):

View File

@@ -2,6 +2,7 @@
#if defined(USE_ESP32_VARIANT_ESP32S2) || defined(USE_ESP32_VARIANT_ESP32S3) || defined(USE_ESP32_VARIANT_ESP32P4)
#include "usb_uart.h"
#include "esphome/core/log.h"
#include "esphome/core/application.h"
#include "esphome/components/uart/uart_debugger.h"
#include <cinttypes>
@@ -262,6 +263,11 @@ void USBUartComponent::start_input(USBUartChannel *channel) {
// Push to lock-free queue for main loop processing
// Push always succeeds because pool size == queue size
this->usb_data_queue_.push(chunk);
// Wake main loop immediately to process USB data instead of waiting for select() timeout
#if defined(USE_SOCKET_SELECT_SUPPORT) && defined(USE_WAKE_LOOP_THREADSAFE)
App.wake_loop_threadsafe();
#endif
}
// On success, restart input immediately from USB task for performance

View File

@@ -67,8 +67,8 @@ void WakeOnLanButton::setup() {
#if defined(USE_SOCKET_IMPL_BSD_SOCKETS) || defined(USE_SOCKET_IMPL_LWIP_SOCKETS)
this->broadcast_socket_ = socket::socket(AF_INET, SOCK_DGRAM, IPPROTO_IP);
if (this->broadcast_socket_ == nullptr) {
this->mark_failed();
this->status_set_error("Could not create socket");
this->mark_failed();
return;
}
int enable = 1;

View File

@@ -1,10 +1,17 @@
import logging
import esphome.codegen as cg
from esphome.components.esp32 import add_idf_component
from esphome.components.ota import BASE_OTA_SCHEMA, OTAComponent, ota_to_code
from esphome.config_helpers import merge_config
import esphome.config_validation as cv
from esphome.const import CONF_ID
from esphome.const import CONF_ID, CONF_OTA, CONF_PLATFORM, CONF_WEB_SERVER
from esphome.core import CORE, coroutine_with_priority
from esphome.coroutine import CoroPriority
import esphome.final_validate as fv
from esphome.types import ConfigType
_LOGGER = logging.getLogger(__name__)
CODEOWNERS = ["@esphome/core"]
DEPENDENCIES = ["network", "web_server_base"]
@@ -12,6 +19,53 @@ DEPENDENCIES = ["network", "web_server_base"]
web_server_ns = cg.esphome_ns.namespace("web_server")
WebServerOTAComponent = web_server_ns.class_("WebServerOTAComponent", OTAComponent)
def _web_server_ota_final_validate(config: ConfigType) -> None:
"""Merge multiple web_server OTA instances into one.
Multiple web_server OTA instances register duplicate HTTP handlers for /update,
causing undefined behavior. Merge them into a single instance.
"""
full_conf = fv.full_config.get()
ota_confs = full_conf.get(CONF_OTA, [])
web_server_ota_configs: list[ConfigType] = []
other_ota_configs: list[ConfigType] = []
for ota_conf in ota_confs:
if ota_conf.get(CONF_PLATFORM) == CONF_WEB_SERVER:
web_server_ota_configs.append(ota_conf)
else:
other_ota_configs.append(ota_conf)
if len(web_server_ota_configs) <= 1:
return
# Merge all web_server OTA configs into the first one
merged = web_server_ota_configs[0]
for ota_conf in web_server_ota_configs[1:]:
# Validate that IDs are consistent if manually specified
if (
merged[CONF_ID].is_manual
and ota_conf[CONF_ID].is_manual
and merged[CONF_ID] != ota_conf[CONF_ID]
):
raise cv.Invalid(
f"Found multiple web_server OTA configurations but {CONF_ID} is inconsistent"
)
merged = merge_config(merged, ota_conf)
_LOGGER.warning(
"Found and merged %d web_server OTA configurations into one instance",
len(web_server_ota_configs),
)
# Replace OTA configs with merged web_server + other OTA platforms
other_ota_configs.append(merged)
full_conf[CONF_OTA] = other_ota_configs
fv.full_config.set(full_conf)
CONFIG_SCHEMA = (
cv.Schema(
{
@@ -22,6 +76,8 @@ CONFIG_SCHEMA = (
.extend(cv.COMPONENT_SCHEMA)
)
FINAL_VALIDATE_SCHEMA = _web_server_ota_final_validate
@coroutine_with_priority(CoroPriority.WEB_SERVER_OTA)
async def to_code(config):

View File

@@ -87,6 +87,29 @@ int nonblocking_send(httpd_handle_t hd, int sockfd, const char *buf, size_t buf_
}
} // namespace
void AsyncWebServer::safe_close_with_shutdown(httpd_handle_t hd, int sockfd) {
// CRITICAL: Shut down receive BEFORE closing to prevent lwIP race conditions
//
// The race condition occurs because close() initiates lwIP teardown while
// the TCP/IP thread can still receive packets, causing assertions when
// recv_tcp() sees partially-torn-down state.
//
// By shutting down receive first, we tell lwIP to stop accepting new data BEFORE
// the teardown begins, eliminating the race window. We only shutdown RD (not RDWR)
// to allow the FIN packet to be sent cleanly during close().
//
// Note: This function may be called with an already-closed socket if the network
// stack closed it. In that case, shutdown() will fail but close() is safe to call.
//
// See: https://github.com/esphome/esphome-webserver/issues/163
// Attempt shutdown - ignore errors as socket may already be closed
shutdown(sockfd, SHUT_RD);
// Always close - safe even if socket is already closed by network stack
close(sockfd);
}
void AsyncWebServer::end() {
if (this->server_) {
httpd_stop(this->server_);
@@ -94,6 +117,18 @@ void AsyncWebServer::end() {
}
}
void AsyncWebServer::set_lru_purge_enable(bool enable) {
if (this->lru_purge_enable_ == enable) {
return; // No change needed
}
this->lru_purge_enable_ = enable;
// If server is already running, restart it with new config
if (this->server_) {
this->end();
this->begin();
}
}
void AsyncWebServer::begin() {
if (this->server_) {
this->end();
@@ -101,6 +136,10 @@ void AsyncWebServer::begin() {
httpd_config_t config = HTTPD_DEFAULT_CONFIG();
config.server_port = this->port_;
config.uri_match_fn = [](const char * /*unused*/, const char * /*unused*/, size_t /*unused*/) { return true; };
// Enable LRU purging if requested (e.g., by captive portal to handle probe bursts)
config.lru_purge_enable = this->lru_purge_enable_;
// Use custom close function that shuts down before closing to prevent lwIP race conditions
config.close_fn = AsyncWebServer::safe_close_with_shutdown;
if (httpd_start(&this->server_, &config) == ESP_OK) {
const httpd_uri_t handler_get = {
.uri = "",
@@ -242,6 +281,7 @@ void AsyncWebServerRequest::send(int code, const char *content_type, const char
void AsyncWebServerRequest::redirect(const std::string &url) {
httpd_resp_set_status(*this, "302 Found");
httpd_resp_set_hdr(*this, "Location", url.c_str());
httpd_resp_set_hdr(*this, "Connection", "close");
httpd_resp_send(*this, nullptr, 0);
}
@@ -489,10 +529,12 @@ AsyncEventSourceResponse::AsyncEventSourceResponse(const AsyncWebServerRequest *
void AsyncEventSourceResponse::destroy(void *ptr) {
auto *rsp = static_cast<AsyncEventSourceResponse *>(ptr);
ESP_LOGD(TAG, "Event source connection closed (fd: %d)", rsp->fd_.load());
// Mark as dead by setting fd to 0 - will be cleaned up in the main loop
rsp->fd_.store(0);
int fd = rsp->fd_.exchange(0); // Atomically get and clear fd
ESP_LOGD(TAG, "Event source connection closed (fd: %d)", fd);
// Mark as dead - will be cleaned up in the main loop
// Note: We don't delete or remove from set here to avoid race conditions
// httpd will call our custom close_fn (safe_close_with_shutdown) which handles
// shutdown() before close() to prevent lwIP race conditions
}
// helper for allowing only unique entries in the queue

View File

@@ -199,12 +199,17 @@ class AsyncWebServer {
return *handler;
}
void set_lru_purge_enable(bool enable);
httpd_handle_t get_server() { return this->server_; }
protected:
uint16_t port_{};
httpd_handle_t server_{};
bool lru_purge_enable_{false};
static esp_err_t request_handler(httpd_req_t *r);
static esp_err_t request_post_handler(httpd_req_t *r);
esp_err_t request_handler_(AsyncWebServerRequest *request) const;
static void safe_close_with_shutdown(httpd_handle_t hd, int sockfd);
#ifdef USE_WEBSERVER_OTA
esp_err_t handle_multipart_upload_(httpd_req_t *r, const char *content_type);
#endif

View File

@@ -12,7 +12,6 @@ from esphome.components.network import (
from esphome.components.psram import is_guaranteed as psram_is_guaranteed
from esphome.config_helpers import filter_source_files_from_platform
import esphome.config_validation as cv
from esphome.config_validation import only_with_esp_idf
from esphome.const import (
CONF_AP,
CONF_BSSID,
@@ -70,6 +69,12 @@ CONF_MIN_AUTH_MODE = "min_auth_mode"
# Limited to 127 because selected_sta_index_ is int8_t in C++
MAX_WIFI_NETWORKS = 127
# Default AP timeout - allows sufficient time to try all BSSIDs during initial connection
# After AP starts, WiFi scanning is skipped to avoid disrupting the AP, so we only
# get best-effort connection attempts. Longer timeout ensures we exhaust all options
# before falling back to AP mode. Aligned with improv wifi_timeout default.
DEFAULT_AP_TIMEOUT = "90s"
wifi_ns = cg.esphome_ns.namespace("wifi")
EAPAuth = wifi_ns.struct("EAPAuth")
ManualIP = wifi_ns.struct("ManualIP")
@@ -178,7 +183,7 @@ CONF_AP_TIMEOUT = "ap_timeout"
WIFI_NETWORK_AP = WIFI_NETWORK_BASE.extend(
{
cv.Optional(
CONF_AP_TIMEOUT, default="1min"
CONF_AP_TIMEOUT, default=DEFAULT_AP_TIMEOUT
): cv.positive_time_period_milliseconds,
}
)
@@ -352,7 +357,7 @@ CONFIG_SCHEMA = cv.All(
single=True
),
cv.Optional(CONF_USE_PSRAM): cv.All(
only_with_esp_idf, cv.requires_component("psram"), cv.boolean
cv.only_on_esp32, cv.requires_component("psram"), cv.boolean
),
}
),

View File

@@ -199,7 +199,12 @@ static constexpr uint8_t WIFI_RETRY_COUNT_PER_AP = 1;
/// Cooldown duration in milliseconds after adapter restart or repeated failures
/// Allows WiFi hardware to stabilize before next connection attempt
static constexpr uint32_t WIFI_COOLDOWN_DURATION_MS = 1000;
static constexpr uint32_t WIFI_COOLDOWN_DURATION_MS = 500;
/// Cooldown duration when fallback AP is active and captive portal may be running
/// Longer interval gives users time to configure WiFi without constant connection attempts
/// While connecting, WiFi can't beacon the AP properly, so needs longer cooldown
static constexpr uint32_t WIFI_COOLDOWN_WITH_AP_ACTIVE_MS = 30000;
static constexpr uint8_t get_max_retries_for_phase(WiFiRetryPhase phase) {
switch (phase) {
@@ -275,7 +280,9 @@ int8_t WiFiComponent::find_next_hidden_sta_(int8_t start_index) {
}
}
if (!this->ssid_was_seen_in_scan_(sta.get_ssid())) {
// If we didn't scan this cycle, treat all networks as potentially hidden
// Otherwise, only retry networks that weren't seen in the scan
if (!this->did_scan_this_cycle_ || !this->ssid_was_seen_in_scan_(sta.get_ssid())) {
ESP_LOGD(TAG, "Hidden candidate " LOG_SECRET("'%s'") " at index %d", sta.get_ssid().c_str(), static_cast<int>(i));
return static_cast<int8_t>(i);
}
@@ -417,10 +424,6 @@ void WiFiComponent::start() {
void WiFiComponent::restart_adapter() {
ESP_LOGW(TAG, "Restarting adapter");
this->wifi_mode_(false, {});
// Enter cooldown state to allow WiFi hardware to stabilize after restart
// Don't set retry_phase_ or num_retried_ here - state machine handles transitions
this->state_ = WIFI_COMPONENT_STATE_COOLDOWN;
this->action_started_ = millis();
this->error_from_callback_ = false;
}
@@ -441,7 +444,16 @@ void WiFiComponent::loop() {
switch (this->state_) {
case WIFI_COMPONENT_STATE_COOLDOWN: {
this->status_set_warning(LOG_STR("waiting to reconnect"));
if (now - this->action_started_ > WIFI_COOLDOWN_DURATION_MS) {
// Skip cooldown if new credentials were provided while connecting
if (this->skip_cooldown_next_cycle_) {
this->skip_cooldown_next_cycle_ = false;
this->check_connecting_finished();
break;
}
// Use longer cooldown when captive portal/improv is active to avoid disrupting user config
bool portal_active = this->is_captive_portal_active_() || this->is_esp32_improv_active_();
uint32_t cooldown_duration = portal_active ? WIFI_COOLDOWN_WITH_AP_ACTIVE_MS : WIFI_COOLDOWN_DURATION_MS;
if (now - this->action_started_ > cooldown_duration) {
// After cooldown we either restarted the adapter because of
// a failure, or something tried to connect over and over
// so we entered cooldown. In both cases we call
@@ -495,7 +507,8 @@ void WiFiComponent::loop() {
#endif // USE_WIFI_AP
#ifdef USE_IMPROV
if (esp32_improv::global_improv_component != nullptr && !esp32_improv::global_improv_component->is_active()) {
if (esp32_improv::global_improv_component != nullptr && !esp32_improv::global_improv_component->is_active() &&
!esp32_improv::global_improv_component->should_start()) {
if (now - this->last_connected_ > esp32_improv::global_improv_component->get_wifi_timeout()) {
if (this->wifi_mode_(true, {}))
esp32_improv::global_improv_component->start();
@@ -605,6 +618,8 @@ void WiFiComponent::set_sta(const WiFiAP &ap) {
this->init_sta(1);
this->add_sta(ap);
this->selected_sta_index_ = 0;
// When new credentials are set (e.g., from improv), skip cooldown to retry immediately
this->skip_cooldown_next_cycle_ = true;
}
WiFiAP WiFiComponent::build_params_for_current_phase_() {
@@ -666,6 +681,17 @@ void WiFiComponent::save_wifi_sta(const std::string &ssid, const std::string &pa
sta.set_ssid(ssid);
sta.set_password(password);
this->set_sta(sta);
// Trigger connection attempt (exits cooldown if needed, no-op if already connecting/connected)
this->connect_soon_();
}
void WiFiComponent::connect_soon_() {
// Only trigger retry if we're in cooldown - if already connecting/connected, do nothing
if (this->state_ == WIFI_COMPONENT_STATE_COOLDOWN) {
ESP_LOGD(TAG, "Exiting cooldown early due to new WiFi credentials");
this->retry_connect();
}
}
void WiFiComponent::start_connecting(const WiFiAP &ap) {
@@ -961,6 +987,7 @@ void WiFiComponent::check_scanning_finished() {
return;
}
this->scan_done_ = false;
this->did_scan_this_cycle_ = true;
if (this->scan_result_.empty()) {
ESP_LOGW(TAG, "No networks found");
@@ -1182,8 +1209,8 @@ WiFiRetryPhase WiFiComponent::determine_next_phase_() {
}
case WiFiRetryPhase::SCAN_CONNECTING:
// If scan found no matching networks, skip to hidden network mode
if (!this->scan_result_.empty() && !this->scan_result_[0].get_matches()) {
// If scan found no networks or no matching networks, skip to hidden network mode
if (this->scan_result_.empty() || !this->scan_result_[0].get_matches()) {
return WiFiRetryPhase::RETRY_HIDDEN;
}
@@ -1227,9 +1254,16 @@ WiFiRetryPhase WiFiComponent::determine_next_phase_() {
return WiFiRetryPhase::RESTARTING_ADAPTER;
case WiFiRetryPhase::RESTARTING_ADAPTER:
// After restart, go back to explicit hidden if we went through it initially, otherwise scan
return this->went_through_explicit_hidden_phase_() ? WiFiRetryPhase::EXPLICIT_HIDDEN
: WiFiRetryPhase::SCAN_CONNECTING;
// After restart, go back to explicit hidden if we went through it initially
if (this->went_through_explicit_hidden_phase_()) {
return WiFiRetryPhase::EXPLICIT_HIDDEN;
}
// Skip scanning when captive portal/improv is active to avoid disrupting AP
// Even passive scans can cause brief AP disconnections on ESP32
if (this->is_captive_portal_active_() || this->is_esp32_improv_active_()) {
return WiFiRetryPhase::RETRY_HIDDEN;
}
return WiFiRetryPhase::SCAN_CONNECTING;
}
// Should never reach here
@@ -1317,6 +1351,12 @@ bool WiFiComponent::transition_to_phase_(WiFiRetryPhase new_phase) {
if (!this->is_captive_portal_active_() && !this->is_esp32_improv_active_()) {
this->restart_adapter();
}
// Clear scan flag - we're starting a new retry cycle
this->did_scan_this_cycle_ = false;
// Always enter cooldown after restart (or skip-restart) to allow stabilization
// Use extended cooldown when AP is active to avoid constant scanning that blocks DNS
this->state_ = WIFI_COMPONENT_STATE_COOLDOWN;
this->action_started_ = millis();
// Return true to indicate we should wait (go to COOLDOWN) instead of immediately connecting
return true;
@@ -1515,6 +1555,20 @@ void WiFiComponent::retry_connect() {
}
}
#ifdef USE_RP2040
// RP2040's mDNS library (LEAmDNS) relies on LwipIntf::stateUpCB() to restart
// mDNS when the network interface reconnects. However, this callback is disabled
// in the arduino-pico framework. As a workaround, we block component setup until
// WiFi is connected, ensuring mDNS.begin() is called with an active connection.
bool WiFiComponent::can_proceed() {
if (!this->has_sta() || this->state_ == WIFI_COMPONENT_STATE_DISABLED || this->ap_setup_) {
return true;
}
return this->is_connected();
}
#endif
void WiFiComponent::set_reboot_timeout(uint32_t reboot_timeout) { this->reboot_timeout_ = reboot_timeout; }
bool WiFiComponent::is_connected() {
return this->state_ == WIFI_COMPONENT_STATE_STA_CONNECTED &&

View File

@@ -280,6 +280,10 @@ class WiFiComponent : public Component {
void retry_connect();
#ifdef USE_RP2040
bool can_proceed() override;
#endif
void set_reboot_timeout(uint32_t reboot_timeout);
bool is_connected();
@@ -291,6 +295,7 @@ class WiFiComponent : public Component {
void set_passive_scan(bool passive);
void save_wifi_sta(const std::string &ssid, const std::string &password);
// ========== INTERNAL METHODS ==========
// (In most use cases you won't need these)
/// Setup WiFi interface.
@@ -424,6 +429,8 @@ class WiFiComponent : public Component {
return true;
}
void connect_soon_();
void wifi_loop_();
bool wifi_mode_(optional<bool> sta, optional<bool> ap);
bool wifi_sta_pre_setup_();
@@ -529,6 +536,8 @@ class WiFiComponent : public Component {
bool enable_on_boot_;
bool got_ipv4_address_{false};
bool keep_scan_results_{false};
bool did_scan_this_cycle_{false};
bool skip_cooldown_next_cycle_{false};
// Pointers at the end (naturally aligned)
Trigger<> *connect_trigger_{new Trigger<>()};

View File

@@ -870,7 +870,13 @@ bssid_t WiFiComponent::wifi_bssid() {
return bssid;
}
std::string WiFiComponent::wifi_ssid() { return WiFi.SSID().c_str(); }
int8_t WiFiComponent::wifi_rssi() { return WiFi.status() == WL_CONNECTED ? WiFi.RSSI() : WIFI_RSSI_DISCONNECTED; }
int8_t WiFiComponent::wifi_rssi() {
if (WiFi.status() != WL_CONNECTED)
return WIFI_RSSI_DISCONNECTED;
int8_t rssi = WiFi.RSSI();
// Values >= 31 are error codes per NONOS SDK API, not valid RSSI readings
return rssi >= 31 ? WIFI_RSSI_DISCONNECTED : rssi;
}
int32_t WiFiComponent::get_wifi_channel() { return WiFi.channel(); }
network::IPAddress WiFiComponent::wifi_subnet_mask_() { return {(const ip_addr_t *) WiFi.subnetMask()}; }
network::IPAddress WiFiComponent::wifi_gateway_ip_() { return {(const ip_addr_t *) WiFi.gatewayIP()}; }

View File

@@ -412,6 +412,7 @@ bool WiFiComponent::wifi_scan_start_(bool passive) {
}
void WiFiComponent::wifi_scan_done_callback_() {
this->scan_result_.clear();
this->scan_done_ = true;
int16_t num = WiFi.scanComplete();
if (num < 0)
@@ -430,7 +431,6 @@ void WiFiComponent::wifi_scan_done_callback_() {
ssid.length() == 0);
}
WiFi.scanDelete();
this->scan_done_ = true;
}
#ifdef USE_WIFI_AP

View File

@@ -740,9 +740,10 @@ def has_at_most_one_key(*keys):
if not isinstance(obj, dict):
raise Invalid("expected dictionary")
number = sum(k in keys for k in obj)
if number > 1:
raise Invalid(f"Cannot specify more than one of {', '.join(keys)}.")
used = set(obj) & set(keys)
if len(used) > 1:
msg = "Cannot specify more than one of '" + "', '".join(used) + "'."
raise MultipleInvalid([Invalid(msg, path=[k]) for k in used])
return obj
return validate

View File

@@ -4,7 +4,7 @@ from enum import Enum
from esphome.enum import StrEnum
__version__ = "2025.11.0b2"
__version__ = "2025.11.5"
ALLOWED_NAME_CHARS = "abcdefghijklmnopqrstuvwxyz0123456789-_"
VALID_SUBSTITUTIONS_CHARACTERS = (
@@ -336,6 +336,7 @@ CONF_ENERGY = "energy"
CONF_ENTITY_CATEGORY = "entity_category"
CONF_ENTITY_ID = "entity_id"
CONF_ENUM_DATAPOINT = "enum_datapoint"
CONF_ENVIRONMENT_VARIABLES = "environment_variables"
CONF_EQUATION = "equation"
CONF_ESP8266_DISABLE_SSL_SUPPORT = "esp8266_disable_ssl_support"
CONF_ESPHOME = "esphome"

View File

@@ -9,8 +9,8 @@
#include "esphome/core/application.h"
#include "esphome/core/helpers.h"
#include <list>
#include <vector>
#include <forward_list>
namespace esphome {
@@ -433,9 +433,10 @@ template<typename... Ts> class WaitUntilAction : public Action<Ts...>, public Co
// Store for later processing
auto now = millis();
auto timeout = this->timeout_value_.optional_value(x...);
this->var_queue_.emplace_front(now, timeout, std::make_tuple(x...));
this->var_queue_.emplace_back(now, timeout, std::make_tuple(x...));
// Do immediate check with fresh timestamp
// Do immediate check with fresh timestamp - don't call loop() synchronously!
// Let the event loop call it to avoid reentrancy issues
if (this->process_queue_(now)) {
// Only enable loop if we still have pending items
this->enable_loop();
@@ -487,7 +488,7 @@ template<typename... Ts> class WaitUntilAction : public Action<Ts...>, public Co
}
Condition<Ts...> *condition_;
std::forward_list<std::tuple<uint32_t, optional<uint32_t>, std::tuple<Ts...>>> var_queue_{};
std::list<std::tuple<uint32_t, optional<uint32_t>, std::tuple<Ts...>>> var_queue_{};
};
template<typename... Ts> class UpdateComponentAction : public Action<Ts...> {

View File

@@ -17,6 +17,7 @@ from esphome.const import (
CONF_COMPILE_PROCESS_LIMIT,
CONF_DEBUG_SCHEDULER,
CONF_DEVICES,
CONF_ENVIRONMENT_VARIABLES,
CONF_ESPHOME,
CONF_FRIENDLY_NAME,
CONF_ID,
@@ -215,6 +216,11 @@ CONFIG_SCHEMA = cv.All(
cv.string_strict: cv.Any([cv.string], cv.string),
}
),
cv.Optional(CONF_ENVIRONMENT_VARIABLES, default={}): cv.Schema(
{
cv.string_strict: cv.string,
}
),
cv.Optional(CONF_ON_BOOT): automation.validate_automation(
{
cv.GenerateID(CONF_TRIGGER_ID): cv.declare_id(StartupTrigger),
@@ -426,6 +432,12 @@ async def _add_platformio_options(pio_options):
cg.add_platformio_option(key, val)
@coroutine_with_priority(CoroPriority.FINAL)
async def _add_environment_variables(env_vars: dict[str, str]) -> None:
# Set environment variables for the build process
os.environ.update(env_vars)
@coroutine_with_priority(CoroPriority.AUTOMATION)
async def _add_automations(config):
for conf in config.get(CONF_ON_BOOT, []):
@@ -563,6 +575,9 @@ async def to_code(config: ConfigType) -> None:
if config[CONF_PLATFORMIO_OPTIONS]:
CORE.add_job(_add_platformio_options, config[CONF_PLATFORMIO_OPTIONS])
if config[CONF_ENVIRONMENT_VARIABLES]:
CORE.add_job(_add_environment_variables, config[CONF_ENVIRONMENT_VARIABLES])
# Process areas
all_areas: list[dict[str, str | core.ID]] = []
if CONF_AREA in config:

View File

@@ -202,7 +202,7 @@ template<typename T> class StatefulEntityBase : public EntityBase {
virtual bool has_state() const { return this->state_.has_value(); }
virtual const T &get_state() const { return this->state_.value(); }
virtual T get_state_default(T default_value) const { return this->state_.value_or(default_value); }
void invalidate_state() { this->set_state_({}); }
void invalidate_state() { this->set_new_state({}); }
void add_full_state_callback(std::function<void(optional<T> previous, optional<T> current)> &&callback) {
if (this->full_state_callbacks_ == nullptr)
@@ -224,20 +224,20 @@ template<typename T> class StatefulEntityBase : public EntityBase {
/**
* Set a new state for this entity. This will trigger callbacks only if the new state is different from the previous.
*
* @param state The new state.
* @param new_state The new state.
* @return True if the state was changed, false if it was the same as before.
*/
bool set_state_(const optional<T> &state) {
if (this->state_ != state) {
virtual bool set_new_state(const optional<T> &new_state) {
if (this->state_ != new_state) {
// call the full state callbacks with the previous and new state
if (this->full_state_callbacks_ != nullptr)
this->full_state_callbacks_->call(this->state_, state);
this->full_state_callbacks_->call(this->state_, new_state);
// trigger legacy callbacks only if the new state is valid and either the trigger on initial state is enabled or
// the previous state was valid
auto had_state = this->has_state();
this->state_ = state;
if (this->state_callbacks_ != nullptr && state.has_value() && (this->trigger_on_initial_state_ || had_state))
this->state_callbacks_->call(state.value());
this->state_ = new_state;
if (this->state_callbacks_ != nullptr && new_state.has_value() && (this->trigger_on_initial_state_ || had_state))
this->state_callbacks_->call(new_state.value());
return true;
}
return false;

View File

@@ -225,6 +225,9 @@ template<typename T> class FixedVector {
other.reset_();
}
// Allow conversion to std::vector
operator std::vector<T>() const { return {data_, data_ + size_}; }
FixedVector &operator=(FixedVector &&other) noexcept {
if (this != &other) {
// Delete our current data

View File

@@ -154,8 +154,8 @@ void HOT Scheduler::set_timer_common_(Component *component, SchedulerItem::Type
// For retries, check if there's a cancelled timeout first
if (is_retry && name_cstr != nullptr && type == SchedulerItem::TIMEOUT &&
(has_cancelled_timeout_in_container_(this->items_, component, name_cstr, /* match_retry= */ true) ||
has_cancelled_timeout_in_container_(this->to_add_, component, name_cstr, /* match_retry= */ true))) {
(has_cancelled_timeout_in_container_locked_(this->items_, component, name_cstr, /* match_retry= */ true) ||
has_cancelled_timeout_in_container_locked_(this->to_add_, component, name_cstr, /* match_retry= */ true))) {
// Skip scheduling - the retry was cancelled
#ifdef ESPHOME_DEBUG_SCHEDULER
ESP_LOGD(TAG, "Skipping retry '%s' - found cancelled item", name_cstr);
@@ -315,7 +315,7 @@ void Scheduler::full_cleanup_removed_items_() {
valid_items.push_back(std::move(item));
} else {
// Recycle removed items
this->recycle_item_(std::move(item));
this->recycle_item_main_loop_(std::move(item));
}
}
@@ -359,8 +359,7 @@ void HOT Scheduler::call(uint32_t now) {
std::unique_ptr<SchedulerItem> item;
{
LockGuard guard{this->lock_};
item = std::move(this->items_[0]);
this->pop_raw_();
item = this->pop_raw_locked_();
}
const char *name = item->get_name();
@@ -401,7 +400,7 @@ void HOT Scheduler::call(uint32_t now) {
// Don't run on failed components
if (item->component != nullptr && item->component->is_failed()) {
LockGuard guard{this->lock_};
this->pop_raw_();
this->recycle_item_main_loop_(this->pop_raw_locked_());
continue;
}
@@ -414,7 +413,7 @@ void HOT Scheduler::call(uint32_t now) {
{
LockGuard guard{this->lock_};
if (is_item_removed_(item.get())) {
this->pop_raw_();
this->recycle_item_main_loop_(this->pop_raw_locked_());
this->to_remove_--;
continue;
}
@@ -423,7 +422,7 @@ void HOT Scheduler::call(uint32_t now) {
// Single-threaded or multi-threaded with atomics: can check without lock
if (is_item_removed_(item.get())) {
LockGuard guard{this->lock_};
this->pop_raw_();
this->recycle_item_main_loop_(this->pop_raw_locked_());
this->to_remove_--;
continue;
}
@@ -443,14 +442,14 @@ void HOT Scheduler::call(uint32_t now) {
LockGuard guard{this->lock_};
auto executed_item = std::move(this->items_[0]);
// Only pop after function call, this ensures we were reachable
// during the function call and know if we were cancelled.
this->pop_raw_();
auto executed_item = this->pop_raw_locked_();
if (executed_item->remove) {
// We were removed/cancelled in the function call, stop
// We were removed/cancelled in the function call, recycle and continue
this->to_remove_--;
this->recycle_item_main_loop_(std::move(executed_item));
continue;
}
@@ -461,7 +460,7 @@ void HOT Scheduler::call(uint32_t now) {
this->to_add_.push_back(std::move(executed_item));
} else {
// Timeout completed - recycle it
this->recycle_item_(std::move(executed_item));
this->recycle_item_main_loop_(std::move(executed_item));
}
has_added_items |= !this->to_add_.empty();
@@ -476,7 +475,7 @@ void HOT Scheduler::process_to_add() {
for (auto &it : this->to_add_) {
if (is_item_removed_(it.get())) {
// Recycle cancelled items
this->recycle_item_(std::move(it));
this->recycle_item_main_loop_(std::move(it));
continue;
}
@@ -497,7 +496,7 @@ size_t HOT Scheduler::cleanup_() {
return this->items_.size();
// We must hold the lock for the entire cleanup operation because:
// 1. We're modifying items_ (via pop_raw_) which requires exclusive access
// 1. We're modifying items_ (via pop_raw_locked_) which requires exclusive access
// 2. We're decrementing to_remove_ which is also modified by other threads
// (though all modifications are already under lock)
// 3. Other threads read items_ when searching for items to cancel in cancel_item_locked_()
@@ -510,17 +509,18 @@ size_t HOT Scheduler::cleanup_() {
if (!item->remove)
break;
this->to_remove_--;
this->pop_raw_();
this->recycle_item_main_loop_(this->pop_raw_locked_());
}
return this->items_.size();
}
void HOT Scheduler::pop_raw_() {
std::unique_ptr<Scheduler::SchedulerItem> HOT Scheduler::pop_raw_locked_() {
std::pop_heap(this->items_.begin(), this->items_.end(), SchedulerItem::cmp);
// Instead of destroying, recycle the item
this->recycle_item_(std::move(this->items_.back()));
// Move the item out before popping - this is the item that was at the front of the heap
auto item = std::move(this->items_.back());
this->items_.pop_back();
return item;
}
// Helper to execute a scheduler item
@@ -556,28 +556,25 @@ bool HOT Scheduler::cancel_item_locked_(Component *component, const char *name_c
#ifndef ESPHOME_THREAD_SINGLE
// Mark items in defer queue as cancelled (they'll be skipped when processed)
if (type == SchedulerItem::TIMEOUT) {
total_cancelled += this->mark_matching_items_removed_(this->defer_queue_, component, name_cstr, type, match_retry);
total_cancelled +=
this->mark_matching_items_removed_locked_(this->defer_queue_, component, name_cstr, type, match_retry);
}
#endif /* not ESPHOME_THREAD_SINGLE */
// Cancel items in the main heap
// Special case: if the last item in the heap matches, we can remove it immediately
// (removing the last element doesn't break heap structure)
// We only mark items for removal here - never recycle directly.
// The main loop may be executing an item's callback right now, and recycling
// would destroy the callback while it's running (use-after-free).
// Only the main loop in call() should recycle items after execution completes.
if (!this->items_.empty()) {
auto &last_item = this->items_.back();
if (this->matches_item_(last_item, component, name_cstr, type, match_retry)) {
this->recycle_item_(std::move(this->items_.back()));
this->items_.pop_back();
total_cancelled++;
}
// For other items in heap, we can only mark for removal (can't remove from middle of heap)
size_t heap_cancelled = this->mark_matching_items_removed_(this->items_, component, name_cstr, type, match_retry);
size_t heap_cancelled =
this->mark_matching_items_removed_locked_(this->items_, component, name_cstr, type, match_retry);
total_cancelled += heap_cancelled;
this->to_remove_ += heap_cancelled; // Track removals for heap items
this->to_remove_ += heap_cancelled;
}
// Cancel items in to_add_
total_cancelled += this->mark_matching_items_removed_(this->to_add_, component, name_cstr, type, match_retry);
total_cancelled += this->mark_matching_items_removed_locked_(this->to_add_, component, name_cstr, type, match_retry);
return total_cancelled > 0;
}
@@ -609,13 +606,12 @@ uint64_t Scheduler::millis_64_(uint32_t now) {
if (now < last && (last - now) > HALF_MAX_UINT32) {
this->millis_major_++;
major++;
this->last_millis_ = now;
#ifdef ESPHOME_DEBUG_SCHEDULER
ESP_LOGD(TAG, "Detected true 32-bit rollover at %" PRIu32 "ms (was %" PRIu32 ")", now, last);
#endif /* ESPHOME_DEBUG_SCHEDULER */
}
// Only update if time moved forward
if (now > last) {
} else if (now > last) {
// Only update if time moved forward
this->last_millis_ = now;
}
@@ -748,7 +744,11 @@ bool HOT Scheduler::SchedulerItem::cmp(const std::unique_ptr<SchedulerItem> &a,
: (a->next_execution_high_ > b->next_execution_high_);
}
void Scheduler::recycle_item_(std::unique_ptr<SchedulerItem> item) {
// Recycle a SchedulerItem back to the pool for reuse.
// IMPORTANT: Caller must hold the scheduler lock before calling this function.
// This protects scheduler_item_pool_ from concurrent access by other threads
// that may be acquiring items from the pool in set_timer_common_().
void Scheduler::recycle_item_main_loop_(std::unique_ptr<SchedulerItem> item) {
if (!item)
return;
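The `millis_64_` hunk above moves the `last_millis_` update into the rollover branch and turns the forward-progress update into an `else if`, so a true 32-bit wrap and a small backwards jitter are handled separately. A standalone sketch of the same widening idea (hypothetical `Millis64` class, single-threaded, no locking), assuming rollover is flagged when the clock appears to jump backwards by more than half the 32-bit range:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical sketch: widen a wrapping 32-bit tick counter into a
// monotonically increasing 64-bit value.
class Millis64 {
 public:
  uint64_t widen(uint32_t now) {
    static constexpr uint32_t HALF_MAX_UINT32 = 0x80000000u;
    if (now < last_ && (last_ - now) > HALF_MAX_UINT32) {
      // True 32-bit rollover: the counter wrapped past 0xFFFFFFFF.
      major_++;
      last_ = now;
    } else if (now > last_) {
      // Normal forward progress.
      last_ = now;
    }
    // Small backwards jitter (now < last_, small gap) leaves last_ untouched.
    return (static_cast<uint64_t>(major_) << 32) | last_;
  }

 private:
  uint32_t last_{0};
  uint16_t major_{0};
};

int main() {
  Millis64 m;
  printf("%llu\n", (unsigned long long) m.widen(0xFFFFFFF0u));  // near the top of the 32-bit range
  printf("%llu\n", (unsigned long long) m.widen(5u));           // after the wrap the 64-bit value keeps growing
}
```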


@@ -219,7 +219,9 @@ class Scheduler {
// Returns the number of items remaining after cleanup
// IMPORTANT: This method should only be called from the main thread (loop task).
size_t cleanup_();
void pop_raw_();
// Remove and return the front item from the heap
// IMPORTANT: Caller must hold the scheduler lock before calling this function.
std::unique_ptr<SchedulerItem> pop_raw_locked_();
private:
// Helper to cancel items by name - must be called with lock held
@@ -243,8 +245,18 @@ class Scheduler {
}
// Helper function to check if item matches criteria for cancellation
inline bool HOT matches_item_(const std::unique_ptr<SchedulerItem> &item, Component *component, const char *name_cstr,
SchedulerItem::Type type, bool match_retry, bool skip_removed = true) const {
// IMPORTANT: Must be called with scheduler lock held
inline bool HOT matches_item_locked_(const std::unique_ptr<SchedulerItem> &item, Component *component,
const char *name_cstr, SchedulerItem::Type type, bool match_retry,
bool skip_removed = true) const {
// THREAD SAFETY: Check for nullptr first to prevent LoadProhibited crashes. On multi-threaded
// platforms, items can be moved out of defer_queue_ during processing, leaving nullptr entries.
// PR #11305 added nullptr checks in callers (mark_matching_items_removed_locked_() and
// has_cancelled_timeout_in_container_locked_()), but this check provides defense-in-depth: helper
// functions should be safe regardless of caller behavior.
// Fixes: https://github.com/esphome/esphome/issues/11940
if (!item)
return false;
if (item->component != component || item->type != type || (skip_removed && item->remove) ||
(match_retry && !item->is_retry)) {
return false;
@@ -260,8 +272,11 @@ class Scheduler {
return is_item_removed_(item) || (item->component != nullptr && item->component->is_failed());
}
// Helper to recycle a SchedulerItem
void recycle_item_(std::unique_ptr<SchedulerItem> item);
// Helper to recycle a SchedulerItem back to the pool.
// IMPORTANT: Only call from main loop context! Recycling clears the callback,
// so calling from another thread while the callback is executing causes use-after-free.
// IMPORTANT: Caller must hold the scheduler lock before calling this function.
void recycle_item_main_loop_(std::unique_ptr<SchedulerItem> item);
// Helper to perform full cleanup when too many items are cancelled
void full_cleanup_removed_items_();
@@ -304,8 +319,8 @@ class Scheduler {
// SAFETY: Moving out the unique_ptr leaves a nullptr in the vector at defer_queue_front_.
// This is intentional and safe because:
// 1. The vector is only cleaned up by cleanup_defer_queue_locked_() at the end of this function
// 2. Any code iterating defer_queue_ MUST check for nullptr items (see mark_matching_items_removed_
// and has_cancelled_timeout_in_container_ in scheduler.h)
// 2. Any code iterating defer_queue_ MUST check for nullptr items (see mark_matching_items_removed_locked_
// and has_cancelled_timeout_in_container_locked_ in scheduler.h)
// 3. The lock protects concurrent access, but the nullptr remains until cleanup
item = std::move(this->defer_queue_[this->defer_queue_front_]);
this->defer_queue_front_++;
@@ -317,7 +332,10 @@ class Scheduler {
now = this->execute_item_(item.get(), now);
}
// Recycle the defer item after execution
this->recycle_item_(std::move(item));
{
LockGuard lock(this->lock_);
this->recycle_item_main_loop_(std::move(item));
}
}
// If we've consumed all items up to the snapshot point, clean up the dead space
@@ -393,10 +411,10 @@ class Scheduler {
// Helper to mark matching items in a container as removed
// Returns the number of items marked for removal
// IMPORTANT: Caller must hold the scheduler lock before calling this function.
// IMPORTANT: Must be called with scheduler lock held
template<typename Container>
size_t mark_matching_items_removed_(Container &container, Component *component, const char *name_cstr,
SchedulerItem::Type type, bool match_retry) {
size_t mark_matching_items_removed_locked_(Container &container, Component *component, const char *name_cstr,
SchedulerItem::Type type, bool match_retry) {
size_t count = 0;
for (auto &item : container) {
// Skip nullptr items (can happen in defer_queue_ when items are being processed)
@@ -405,7 +423,7 @@ class Scheduler {
// the vector can still contain nullptr items from the processing loop. This check prevents crashes.
if (!item)
continue;
if (this->matches_item_(item, component, name_cstr, type, match_retry)) {
if (this->matches_item_locked_(item, component, name_cstr, type, match_retry)) {
// Mark item for removal (platform-specific)
this->set_item_removed_(item.get(), true);
count++;
@@ -415,9 +433,10 @@ class Scheduler {
}
// Template helper to check if any item in a container matches our criteria
// IMPORTANT: Must be called with scheduler lock held
template<typename Container>
bool has_cancelled_timeout_in_container_(const Container &container, Component *component, const char *name_cstr,
bool match_retry) const {
bool has_cancelled_timeout_in_container_locked_(const Container &container, Component *component,
const char *name_cstr, bool match_retry) const {
for (const auto &item : container) {
// Skip nullptr items (can happen in defer_queue_ when items are being processed)
// The defer_queue_ uses index-based processing: items are std::moved out but left in the
@@ -426,8 +445,8 @@ class Scheduler {
if (!item)
continue;
if (is_item_removed_(item.get()) &&
this->matches_item_(item, component, name_cstr, SchedulerItem::TIMEOUT, match_retry,
/* skip_removed= */ false)) {
this->matches_item_locked_(item, component, name_cstr, SchedulerItem::TIMEOUT, match_retry,
/* skip_removed= */ false)) {
return true;
}
}
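The comments in this header spell out two invariants: cancellation from any thread only marks items as removed, and recycling back into the pool happens only on the main loop with the lock held, so a callback is never freed while it may still be executing. A minimal sketch of that split (hypothetical `TinyScheduler`, `std::mutex` in place of ESPHome's `LockGuard`), not the real scheduler:

```cpp
#include <functional>
#include <memory>
#include <mutex>
#include <vector>

// Hypothetical sketch of the cancel-vs-recycle split described above.
struct Item {
  std::function<void()> callback;
  bool remove{false};
};

class TinyScheduler {
 public:
  void defer(std::function<void()> cb) {
    std::lock_guard<std::mutex> guard(lock_);
    queue_.push_back(std::make_unique<Item>(Item{std::move(cb), false}));
  }

  // Safe from any thread: only flips a flag, never frees the callback,
  // which may be executing on the main loop right now.
  void cancel_all() {
    std::lock_guard<std::mutex> guard(lock_);
    for (auto &item : queue_) {
      if (item)  // slots can be nullptr while the main loop has moved items out
        item->remove = true;
    }
  }

  // Main loop only: executes items, then recycles them under the lock so the
  // pool is never touched concurrently with defer().
  void run() {
    for (size_t i = 0;; i++) {
      std::unique_ptr<Item> item;
      {
        std::lock_guard<std::mutex> guard(lock_);
        if (i >= queue_.size())
          break;
        item = std::move(queue_[i]);  // leaves a nullptr slot behind until cleanup
      }
      if (!item->remove)
        item->callback();
      std::lock_guard<std::mutex> guard(lock_);
      pool_.push_back(std::move(item));  // recycle only here, lock held
    }
    std::lock_guard<std::mutex> guard(lock_);
    queue_.clear();
  }

 private:
  std::mutex lock_;
  std::vector<std::unique_ptr<Item>> queue_;
  std::vector<std::unique_ptr<Item>> pool_;
};

int main() {
  TinyScheduler s;
  s.defer([] {});
  s.run();
}
```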


@@ -192,7 +192,7 @@ def get_esphome_device_ip(
data = json.loads(payload)
if "name" not in data or data["name"] != dev_name:
_LOGGER.Warn("Wrong device answer")
_LOGGER.warning("Wrong device answer")
return
dev_ip = []


@@ -1,8 +1,12 @@
from collections.abc import Callable
import importlib
import logging
import os
from pathlib import Path
import re
import shutil
import stat
from types import TracebackType
from esphome import loader
from esphome.config import iter_component_configs, iter_components
@@ -121,7 +125,7 @@ def update_storage_json() -> None:
)
else:
_LOGGER.info("Core config or version changed, cleaning build files...")
clean_build()
clean_build(clear_pio_cache=False)
elif storage_should_update_cmake_cache(old, new):
_LOGGER.info("Integrations changed, cleaning cmake cache...")
clean_cmake_cache()
@@ -301,9 +305,24 @@ def clean_cmake_cache():
pioenvs_cmake_path.unlink()
def clean_build():
import shutil
def _rmtree_error_handler(
func: Callable[[str], object],
path: str,
exc_info: tuple[type[BaseException], BaseException, TracebackType | None],
) -> None:
"""Error handler for shutil.rmtree to handle read-only files on Windows.
On Windows, git pack files and other files may be marked read-only,
causing shutil.rmtree to fail with "Access is denied". This handler
removes the read-only flag and retries the deletion.
"""
if os.access(path, os.W_OK):
raise exc_info[1].with_traceback(exc_info[2])
os.chmod(path, stat.S_IWUSR | stat.S_IRUSR)
func(path)
def clean_build(clear_pio_cache: bool = True):
# Allow skipping cache cleaning for integration tests
if os.environ.get("ESPHOME_SKIP_CLEAN_BUILD"):
_LOGGER.warning("Skipping build cleaning (ESPHOME_SKIP_CLEAN_BUILD set)")
@@ -312,16 +331,19 @@ def clean_build():
pioenvs = CORE.relative_pioenvs_path()
if pioenvs.is_dir():
_LOGGER.info("Deleting %s", pioenvs)
shutil.rmtree(pioenvs)
shutil.rmtree(pioenvs, onerror=_rmtree_error_handler)
piolibdeps = CORE.relative_piolibdeps_path()
if piolibdeps.is_dir():
_LOGGER.info("Deleting %s", piolibdeps)
shutil.rmtree(piolibdeps)
shutil.rmtree(piolibdeps, onerror=_rmtree_error_handler)
dependencies_lock = CORE.relative_build_path("dependencies.lock")
if dependencies_lock.is_file():
_LOGGER.info("Deleting %s", dependencies_lock)
dependencies_lock.unlink()
if not clear_pio_cache:
return
# Clean PlatformIO cache to resolve CMake compiler detection issues
# This helps when toolchain paths change or get corrupted
try:
@@ -334,13 +356,17 @@ def clean_build():
cache_dir = Path(config.get("platformio", "cache_dir"))
if cache_dir.is_dir():
_LOGGER.info("Deleting PlatformIO cache %s", cache_dir)
shutil.rmtree(cache_dir)
shutil.rmtree(cache_dir, onerror=_rmtree_error_handler)
def clean_all(configuration: list[str]):
import shutil
data_dirs = [Path(dir) / ".esphome" for dir in configuration]
data_dirs = []
for config in configuration:
item = Path(config)
if item.is_file() and item.suffix in (".yaml", ".yml"):
data_dirs.append(item.parent / ".esphome")
else:
data_dirs.append(item / ".esphome")
if is_ha_addon():
data_dirs.append(Path("/data"))
if "ESPHOME_DATA_DIR" in os.environ:
@@ -355,7 +381,7 @@ def clean_all(configuration: list[str]):
if item.is_file() and not item.name.endswith(".json"):
item.unlink()
elif item.is_dir() and item.name != "storage":
shutil.rmtree(item)
shutil.rmtree(item, onerror=_rmtree_error_handler)
# Clean PlatformIO project files
try:
@@ -369,7 +395,7 @@ def clean_all(configuration: list[str]):
path = Path(config.get("platformio", pio_dir))
if path.is_dir():
_LOGGER.info("Deleting PlatformIO %s %s", pio_dir, path)
shutil.rmtree(path)
shutil.rmtree(path, onerror=_rmtree_error_handler)
GITIGNORE_CONTENT = """# Gitignore settings for ESPHome


@@ -1,6 +1,18 @@
"""Tests for the web_server OTA platform."""
from __future__ import annotations
from collections.abc import Callable
import logging
from typing import Any
import pytest
from esphome import config_validation as cv
from esphome.components.web_server.ota import _web_server_ota_final_validate
from esphome.const import CONF_ID, CONF_OTA, CONF_PLATFORM, CONF_WEB_SERVER
from esphome.core import ID
import esphome.final_validate as fv
def test_web_server_ota_generated(generate_main: Callable[[str], str]) -> None:
@@ -100,3 +112,144 @@ def test_web_server_ota_esp8266(generate_main: Callable[[str], str]) -> None:
# Check web server OTA component is present
assert "WebServerOTAComponent" in main_cpp
assert "web_server::WebServerOTAComponent" in main_cpp
@pytest.mark.parametrize(
("ota_configs", "expected_count", "warning_expected"),
[
pytest.param(
[
{
CONF_PLATFORM: CONF_WEB_SERVER,
CONF_ID: ID("ota_web", is_manual=False),
}
],
1,
False,
id="single_instance_no_merge",
),
pytest.param(
[
{
CONF_PLATFORM: CONF_WEB_SERVER,
CONF_ID: ID("ota_web_1", is_manual=False),
},
{
CONF_PLATFORM: CONF_WEB_SERVER,
CONF_ID: ID("ota_web_2", is_manual=False),
},
],
1,
True,
id="two_instances_merged",
),
pytest.param(
[
{
CONF_PLATFORM: CONF_WEB_SERVER,
CONF_ID: ID("ota_web_1", is_manual=False),
},
{
CONF_PLATFORM: "esphome",
CONF_ID: ID("ota_esphome", is_manual=False),
},
{
CONF_PLATFORM: CONF_WEB_SERVER,
CONF_ID: ID("ota_web_2", is_manual=False),
},
],
2,
True,
id="mixed_platforms_web_server_merged",
),
],
)
def test_web_server_ota_instance_merging(
ota_configs: list[dict[str, Any]],
expected_count: int,
warning_expected: bool,
caplog: pytest.LogCaptureFixture,
) -> None:
"""Test web_server OTA instance merging behavior."""
full_conf = {CONF_OTA: ota_configs.copy()}
token = fv.full_config.set(full_conf)
try:
with caplog.at_level(logging.WARNING):
_web_server_ota_final_validate({})
updated_conf = fv.full_config.get()
# Verify total number of OTA platforms
assert len(updated_conf[CONF_OTA]) == expected_count
# Verify warning
if warning_expected:
assert any(
"Found and merged" in record.message
and "web_server OTA" in record.message
for record in caplog.records
), "Expected merge warning not found in log"
else:
assert len(caplog.records) == 0, "Unexpected warnings logged"
finally:
fv.full_config.reset(token)
def test_web_server_ota_consistent_manual_ids(
caplog: pytest.LogCaptureFixture,
) -> None:
"""Test that consistent manual IDs can be merged successfully."""
ota_configs = [
{
CONF_PLATFORM: CONF_WEB_SERVER,
CONF_ID: ID("ota_web", is_manual=True),
},
{
CONF_PLATFORM: CONF_WEB_SERVER,
CONF_ID: ID("ota_web", is_manual=True),
},
]
full_conf = {CONF_OTA: ota_configs}
token = fv.full_config.set(full_conf)
try:
with caplog.at_level(logging.WARNING):
_web_server_ota_final_validate({})
updated_conf = fv.full_config.get()
assert len(updated_conf[CONF_OTA]) == 1
assert updated_conf[CONF_OTA][0][CONF_ID].id == "ota_web"
assert any(
"Found and merged" in record.message and "web_server OTA" in record.message
for record in caplog.records
)
finally:
fv.full_config.reset(token)
def test_web_server_ota_inconsistent_manual_ids() -> None:
"""Test that inconsistent manual IDs raise an error."""
ota_configs = [
{
CONF_PLATFORM: CONF_WEB_SERVER,
CONF_ID: ID("ota_web_1", is_manual=True),
},
{
CONF_PLATFORM: CONF_WEB_SERVER,
CONF_ID: ID("ota_web_2", is_manual=True),
},
]
full_conf = {CONF_OTA: ota_configs}
token = fv.full_config.set(full_conf)
try:
with pytest.raises(
cv.Invalid,
match="Found multiple web_server OTA configurations but id is inconsistent",
):
_web_server_ota_final_validate({})
finally:
fv.full_config.reset(token)


@@ -0,0 +1 @@
"""Tests for SNTP component."""


@@ -0,0 +1,22 @@
esphome:
name: sntp-test
esp32:
board: esp32dev
framework:
type: esp-idf
wifi:
ssid: "testssid"
password: "testpassword"
# Test multiple SNTP instances that should be merged
time:
- platform: sntp
servers:
- 192.168.1.1
- pool.ntp.org
- platform: sntp
servers:
- pool.ntp.org
- 192.168.1.2


@@ -0,0 +1,238 @@
"""Tests for SNTP time configuration validation."""
from __future__ import annotations
import logging
from typing import Any
import pytest
from esphome import config_validation as cv
from esphome.components.sntp.time import CONF_SNTP, _sntp_final_validate
from esphome.const import CONF_ID, CONF_PLATFORM, CONF_SERVERS, CONF_TIME
from esphome.core import ID
import esphome.final_validate as fv
@pytest.mark.parametrize(
("time_configs", "expected_count", "expected_servers", "warning_messages"),
[
pytest.param(
[
{
CONF_PLATFORM: CONF_SNTP,
CONF_ID: ID("sntp_time", is_manual=False),
CONF_SERVERS: ["192.168.1.1", "pool.ntp.org"],
}
],
1,
["192.168.1.1", "pool.ntp.org"],
[],
id="single_instance_no_merge",
),
pytest.param(
[
{
CONF_PLATFORM: CONF_SNTP,
CONF_ID: ID("sntp_time_1", is_manual=False),
CONF_SERVERS: ["192.168.1.1", "pool.ntp.org"],
},
{
CONF_PLATFORM: CONF_SNTP,
CONF_ID: ID("sntp_time_2", is_manual=False),
CONF_SERVERS: ["192.168.1.2"],
},
],
1,
["192.168.1.1", "pool.ntp.org", "192.168.1.2"],
["Found and merged 2 SNTP time configurations into one instance"],
id="two_instances_merged",
),
pytest.param(
[
{
CONF_PLATFORM: CONF_SNTP,
CONF_ID: ID("sntp_time_1", is_manual=False),
CONF_SERVERS: ["192.168.1.1", "pool.ntp.org"],
},
{
CONF_PLATFORM: CONF_SNTP,
CONF_ID: ID("sntp_time_2", is_manual=False),
CONF_SERVERS: ["pool.ntp.org", "192.168.1.2"],
},
],
1,
["192.168.1.1", "pool.ntp.org", "192.168.1.2"],
["Found and merged 2 SNTP time configurations into one instance"],
id="deduplication_preserves_order",
),
pytest.param(
[
{
CONF_PLATFORM: CONF_SNTP,
CONF_ID: ID("sntp_time_1", is_manual=False),
CONF_SERVERS: ["192.168.1.1", "pool.ntp.org"],
},
{
CONF_PLATFORM: CONF_SNTP,
CONF_ID: ID("sntp_time_2", is_manual=False),
CONF_SERVERS: ["192.168.1.2", "pool2.ntp.org"],
},
{
CONF_PLATFORM: CONF_SNTP,
CONF_ID: ID("sntp_time_3", is_manual=False),
CONF_SERVERS: ["pool3.ntp.org"],
},
],
1,
["192.168.1.1", "pool.ntp.org", "192.168.1.2"],
[
"SNTP supports maximum 3 servers. Dropped excess server(s): ['pool2.ntp.org', 'pool3.ntp.org']",
"Found and merged 3 SNTP time configurations into one instance",
],
id="three_instances_drops_excess_servers",
),
pytest.param(
[
{
CONF_PLATFORM: CONF_SNTP,
CONF_ID: ID("sntp_time_1", is_manual=False),
CONF_SERVERS: [
"192.168.1.1",
"pool.ntp.org",
"pool.ntp.org",
"192.168.1.1",
],
},
{
CONF_PLATFORM: CONF_SNTP,
CONF_ID: ID("sntp_time_2", is_manual=False),
CONF_SERVERS: ["pool.ntp.org", "192.168.1.2"],
},
],
1,
["192.168.1.1", "pool.ntp.org", "192.168.1.2"],
["Found and merged 2 SNTP time configurations into one instance"],
id="deduplication_multiple_duplicates",
),
],
)
def test_sntp_instance_merging(
time_configs: list[dict[str, Any]],
expected_count: int,
expected_servers: list[str],
warning_messages: list[str],
caplog: pytest.LogCaptureFixture,
) -> None:
"""Test SNTP instance merging behavior."""
# Create a mock full config with time configs
full_conf = {CONF_TIME: time_configs.copy()}
# Set the context var
token = fv.full_config.set(full_conf)
try:
with caplog.at_level(logging.WARNING):
_sntp_final_validate({})
# Get the updated config
updated_conf = fv.full_config.get()
# Check if merging occurred
if len(time_configs) > 1:
# Verify only one SNTP instance remains
sntp_instances = [
tc
for tc in updated_conf[CONF_TIME]
if tc.get(CONF_PLATFORM) == CONF_SNTP
]
assert len(sntp_instances) == expected_count
# Verify server list
assert sntp_instances[0][CONF_SERVERS] == expected_servers
# Verify warnings
for expected_msg in warning_messages:
assert any(
expected_msg in record.message for record in caplog.records
), f"Expected warning message '{expected_msg}' not found in log"
else:
# Single instance should not trigger merging or warnings
assert len(caplog.records) == 0
# Config should be unchanged
assert updated_conf[CONF_TIME] == time_configs
finally:
fv.full_config.reset(token)
def test_sntp_inconsistent_manual_ids() -> None:
"""Test that inconsistent manual IDs raise an error."""
# Create configs with manual IDs that are inconsistent
time_configs = [
{
CONF_PLATFORM: CONF_SNTP,
CONF_ID: ID("sntp_time_1", is_manual=True),
CONF_SERVERS: ["192.168.1.1"],
},
{
CONF_PLATFORM: CONF_SNTP,
CONF_ID: ID("sntp_time_2", is_manual=True),
CONF_SERVERS: ["192.168.1.2"],
},
]
full_conf = {CONF_TIME: time_configs}
token = fv.full_config.set(full_conf)
try:
with pytest.raises(
cv.Invalid,
match="Found multiple SNTP configurations but id is inconsistent",
):
_sntp_final_validate({})
finally:
fv.full_config.reset(token)
def test_sntp_with_other_time_platforms(caplog: pytest.LogCaptureFixture) -> None:
"""Test that SNTP merging doesn't affect other time platforms."""
time_configs = [
{
CONF_PLATFORM: CONF_SNTP,
CONF_ID: ID("sntp_time_1", is_manual=False),
CONF_SERVERS: ["192.168.1.1"],
},
{
CONF_PLATFORM: "homeassistant",
CONF_ID: ID("homeassistant_time", is_manual=False),
},
{
CONF_PLATFORM: CONF_SNTP,
CONF_ID: ID("sntp_time_2", is_manual=False),
CONF_SERVERS: ["192.168.1.2"],
},
]
full_conf = {CONF_TIME: time_configs.copy()}
token = fv.full_config.set(full_conf)
try:
with caplog.at_level(logging.WARNING):
_sntp_final_validate({})
updated_conf = fv.full_config.get()
# Should have 2 time platforms: 1 merged SNTP + 1 homeassistant
assert len(updated_conf[CONF_TIME]) == 2
# Find the platforms
platforms = {tc[CONF_PLATFORM] for tc in updated_conf[CONF_TIME]}
assert platforms == {CONF_SNTP, "homeassistant"}
# Verify SNTP was merged
sntp_instances = [
tc for tc in updated_conf[CONF_TIME] if tc[CONF_PLATFORM] == CONF_SNTP
]
assert len(sntp_instances) == 1
assert sntp_instances[0][CONF_SERVERS] == ["192.168.1.1", "192.168.1.2"]
finally:
fv.full_config.reset(token)


@@ -0,0 +1,27 @@
esp32:
variant: esp32p4
flash_size: 32MB
cpu_frequency: 400MHz
framework:
type: esp-idf
advanced:
enable_idf_experimental_features: yes
ota:
platform: esphome
wifi:
ssid: MySSID
password: password1
esp32_hosted:
variant: ESP32C6
slot: 1
active_high: true
reset_pin: GPIO15
cmd_pin: GPIO13
clk_pin: GPIO12
d0_pin: GPIO11
d1_pin: GPIO10
d2_pin: GPIO9
d3_pin: GPIO8


@@ -2,6 +2,9 @@ esphome:
debug_scheduler: true
platformio_options:
board_build.flash_mode: dio
environment_variables:
TEST_ENV_VAR: "test_value"
BUILD_NUMBER: "12345"
area:
id: testing_area
name: Testing Area


@@ -115,8 +115,8 @@ wifi:
password: PASSWORD123
time:
platform: sntp
id: time_id
- platform: sntp
id: sntp_time
text:
- id: lvgl_text


@@ -478,19 +478,19 @@ lvgl:
id: hello_label
text:
time_format: "%c"
time: time_id
time: sntp_time
- lvgl.label.update:
id: hello_label
text:
time_format: "%c"
time: !lambda return id(time_id).now();
time: !lambda return id(sntp_time).now();
- lvgl.label.update:
id: hello_label
text:
time_format: "%c"
time: !lambda |-
ESP_LOGD("label", "multi-line lambda");
return id(time_id).now();
return id(sntp_time).now();
on_value:
logger.log:
format: "state now %d"
@@ -703,7 +703,9 @@ lvgl:
on_value:
- lvgl.spinbox.update:
id: spinbox_id
value: !lambda return x;
value: !lambda |-
static float yyy = 83.0;
return yyy + .8;
- button:
styles: spin_button
id: spin_up
@@ -879,6 +881,7 @@ lvgl:
grid_columns: [40, fr(1), fr(1)]
pad_row: 6px
pad_column: 0
multiple_widgets_per_cell: true
widgets:
- image:
grid_cell_row_pos: 0
@@ -903,6 +906,10 @@ lvgl:
grid_cell_row_pos: 1
grid_cell_column_pos: 0
text: "Grid cell 1/0"
- label:
grid_cell_row_pos: 1
grid_cell_column_pos: 0
text: "Duplicate for 1/0"
- label:
styles: bdr_style
grid_cell_row_pos: 1


@@ -4,6 +4,7 @@ wifi:
time:
- platform: sntp
id: sntp_time
mqtt:
broker: "192.168.178.84"


@@ -3,6 +3,7 @@ wifi:
time:
- platform: sntp
id: sntp_time
sensor:
- platform: uptime


@@ -4,8 +4,10 @@ wifi:
time:
- platform: sntp
id: sntp_time
wireguard:
time_id: sntp_time
address: 172.16.34.100
netmask: 255.255.255.0
# NEVER use the following keys for your VPN -- they are now public!


@@ -7,7 +7,7 @@ This directory contains end-to-end integration tests for ESPHome, focusing on te
- `conftest.py` - Common fixtures and utilities
- `const.py` - Constants used throughout the integration tests
- `types.py` - Type definitions for fixtures and functions
- `state_utils.py` - State handling utilities (e.g., `InitialStateHelper`, `build_key_to_entity_mapping`)
- `state_utils.py` - State handling utilities (e.g., `InitialStateHelper`, `find_entity`, `require_entity`)
- `fixtures/` - YAML configuration files for tests
- `test_*.py` - Individual test files
@@ -53,6 +53,28 @@ The `InitialStateHelper` class solves a common problem in integration tests: whe
**Future work:**
Consider converting existing integration tests to use `InitialStateHelper` for more reliable state tracking and to eliminate race conditions related to initial state broadcasts.
#### Entity Lookup Helpers (`state_utils.py`)
Two helper functions simplify finding entities in test code:
**`find_entity(entities, object_id_substring, entity_type=None)`**
- Finds an entity by searching for a substring in its `object_id` (case-insensitive)
- Optionally filters by entity type (e.g., `BinarySensorInfo`)
- Returns `None` if not found
**`require_entity(entities, object_id_substring, entity_type=None, description=None)`**
- Same as `find_entity` but raises `AssertionError` if not found
- Use `description` parameter for clearer error messages
```python
from aioesphomeapi import BinarySensorInfo
from .state_utils import require_entity
# Find entities with clear error messages
binary_sensor = require_entity(entities, "test_sensor", BinarySensorInfo)
button = require_entity(entities, "set_true", description="Set True button")
```
### Writing Tests
The simplest way to write a test is to use the `run_compiled` and `api_client_connected` fixtures:


@@ -0,0 +1,39 @@
esphome:
name: test-binary-sensor-invalidate
host:
api:
batch_delay: 0ms # Disable batching to receive all state updates
logger:
level: DEBUG
# Template binary sensor that we can control
binary_sensor:
- platform: template
name: "Test Binary Sensor"
id: test_binary_sensor
# Buttons to control the binary sensor state
button:
- platform: template
name: "Set True"
id: set_true_button
on_press:
- binary_sensor.template.publish:
id: test_binary_sensor
state: true
- platform: template
name: "Set False"
id: set_false_button
on_press:
- binary_sensor.template.publish:
id: test_binary_sensor
state: false
- platform: template
name: "Invalidate State"
id: invalidate_button
on_press:
- binary_sensor.invalidate_state:
id: test_binary_sensor


@@ -0,0 +1,131 @@
esphome:
name: test-script-delay-params
host:
api:
actions:
# Test case from issue #12044: parent script with repeat calling child with delay
- action: test_repeat_with_delay
then:
- logger.log: "=== TEST: Repeat loop calling script with delay and parameters ==="
- script.execute: father_script
# Test case from issue #12043: script.wait with delayed child script
- action: test_script_wait
then:
- logger.log: "=== TEST: script.wait with delayed child script ==="
- script.execute: show_start_page
- script.wait: show_start_page
- logger.log: "After wait: script completed successfully"
# Test: Delay with different parameter types
- action: test_delay_param_types
then:
- logger.log: "=== TEST: Delay with various parameter types ==="
- script.execute:
id: delay_with_int
val: 42
- delay: 50ms
- script.execute:
id: delay_with_string
msg: "test message"
- delay: 50ms
- script.execute:
id: delay_with_float
num: 3.14
logger:
level: DEBUG
script:
# Reproduces issue #12044: child script with conditional delay
- id: son_script
mode: single
parameters:
iteration: int
then:
- logger.log:
format: "Son script started with iteration %d"
args: ['iteration']
- if:
condition:
lambda: 'return iteration >= 5;'
then:
- logger.log:
format: "Son script delaying for iteration %d"
args: ['iteration']
- delay: 100ms
- logger.log:
format: "Son script finished with iteration %d"
args: ['iteration']
# Reproduces issue #12044: parent script with repeat loop
- id: father_script
mode: single
then:
- repeat:
count: 10
then:
- logger.log:
format: "Father iteration %d: calling son"
args: ['iteration']
- script.execute:
id: son_script
iteration: !lambda 'return iteration;'
- script.wait: son_script
- logger.log:
format: "Father iteration %d: son finished, wait returned"
args: ['iteration']
# Reproduces issue #12043: script.wait hangs
- id: show_start_page
mode: single
then:
- logger.log: "Start page: beginning"
- delay: 100ms
- logger.log: "Start page: after delay"
- delay: 100ms
- logger.log: "Start page: completed"
# Test delay with int parameter
- id: delay_with_int
mode: single
parameters:
val: int
then:
- logger.log:
format: "Int test: before delay, val=%d"
args: ['val']
- delay: 50ms
- logger.log:
format: "Int test: after delay, val=%d"
args: ['val']
# Test delay with string parameter
- id: delay_with_string
mode: single
parameters:
msg: string
then:
- logger.log:
format: "String test: before delay, msg=%s"
args: ['msg.c_str()']
- delay: 50ms
- logger.log:
format: "String test: after delay, msg=%s"
args: ['msg.c_str()']
# Test delay with float parameter
- id: delay_with_float
mode: single
parameters:
num: float
then:
- logger.log:
format: "Float test: before delay, num=%.2f"
args: ['num']
- delay: 50ms
- logger.log:
format: "Float test: after delay, num=%.2f"
args: ['num']

Some files were not shown because too many files have changed in this diff.