I love it when a plan comes together #340
Replies: 4 comments
-
Thanx. I think I'm in that crowd as I've made a bit of trouble recently. Plus, the world revolves around me, so let's just go with that. :-)

I've spent a lot of time in the code and the project recently and plan to spend more. The more I quibble about tiny items, the more I realize I personally have a bigger problem: I can't figure out what we (this project) are trying to build. I touched on many of these questions yesterday at #330 (comment). Rutger didn't seem to think they were bad questions, but deep inside a bug report probably isn't the right place for them. Let's try having a discussion about them here. Maybe the answers appear in some doc somewhere, but they've not popped out to me, and the answers to many of them change future (re)designs in some important ways. This list is numbered not so we can have a committee ratholing on it, but hopefully to make it easier to discuss and cross-reference items.
1) What is the observed ratio of NightDriver users that use it as an Instructable-style project and just build a few projects quite literally from the code vs. the user base using it as an API in some wild and crazy thing they're building, extensively modifying and integrating the code in some way? Is our expected user base closer to that of WLED <https://kno.wled.ge/> or to Framebuffer_GFX <https://github.com/marcmerlin/Framebuffer_GFX>? Reason: building an API vs. building an end-user product is different. Example: API users might not care about recompiling to change some template goo or a pin number. End users want to click on some buttons that say "I have 234 of device X on pin Y and want to run effect Z".

2) How about the ratio of strip vs. matrix (and matrix NeoPixel vs. HUB75 even within "matrix") usage? In terms of compute power, pushing video to an array of HUB75s (thousands to tens of thousands of pixels with demanding refresh needs) is just a different problem than a hundred NeoPixels (which have a different kind of demanding refresh needs). It might be on a similar order as thousands of NeoPixels attached to an array of different controllers. So: is the primary focus strips, NeoPixel arrays, and/or chained HUB75s? Reason: some effects look reasonable on both a 1D strip and high-res 2D panels, but a NASDAQ ticker tape in 1D isn't very fun. This target helps drive target hardware choices. An ATmega can drive many tens of NeoPixels. A 64x64 HUB75 takes about 100K just for the color data alone, so that pretty quickly blows out small micros even before you take advantage of the awesome chaining. If you're driving ten 4kPixel HUB75s, you probably want Pi-class horsepower, and they just have conflicting designs and data structures. This gives a natural segue to...

3) What's the actual hardware and effect target range? We know the current 'mesmerizer' configuration requires 2 cores, even if it doesn't hard-throw on that. It doesn't seem like crazy talk to allow trying it on, say, a single D1 processor (1GHz, but single core), for example. ESP32 has hardware (RMT) that helps with NeoPixels, but it's not clear yet if FastLED's actually taking advantage of that. Again, a fast CPU could probably bit-bang a few hundred WS28xxs using the SPI lookup approach (see the first sketch at the end of this comment). We may not need to pick exact chips/SBCs to support, but we should draw a circle somewhere on the spectrum between ATmega (there's lots of Arduino code here...) and an 8-core Pi. Even the scale between the 8266 and ESP32 (same chipmaker, a few years apart) is a pretty wide band of parts. (#330 continues along in example and background of this.) BL602 has an IR controller on par with the ESP32's, but the chip is dirt cheap - under $2 in single quantities for low pin count configurations. Reason: setting a goal for expected target hardware reduces algorithmic/code conflict and thrash. Being able to just say something is out of scope reduces wasted goal-chasing. ATmega may care deeply about dozens of bytes, while a Pi might be able to assign one core per channel to drive multiple, different video effects.

3a) (Cos' I don't feel like renumbering things.) Set hard performance targets and quantify them in automation. Commit that "a 240MHz ESP32-S3 must be able to render effect E to X*Y NeoPixels at F Hz" - and any CL regressing that is broken. (Hints: we can measure NeoPixel performance without NeoPixels actually being present on the bus, and if it comes down to automating HUB75 performance, I'd bet I could build a dollar CH32V003 device that emulated enough of the bus to acknowledge bus transfers at a similar speed and just throw the frames away to allow lab testing without needing $$$ panels and power.) The second sketch at the end of this comment shows the shape of such a gate. Reason: it's important to have a couple of threshold numbers that say "X is important to us" and to stand as a bulwark against creep and bloat.

4) Is networking in the MCU a requirement? This simplifies device configuration, but has costs, of course. (4a) Similarly, do we care if a host computer (running some random OS) is required for some configurations? Mesmerizer running effects on a PC through a $5 USB->I2C/SPI bridge (something like the CH347) and/or multiplexers like the TCA9548A or 74HC153 to drive an array of NeoPixels and/or HUB75s is not the craziest idea that any of us will hear today, but it would kill a lot of code. Reason it matters: it lets us cross off the really low-end MCUs and lets us safely say that all UI is on the web. But if we support hardware without this, we have some (non-trivial) inventing and building to do.

5) If #1 determines we're building an end-user oriented product and #4 says web is a requirement, should it be a goal to configure EVERYTHING via web? Somewhere in the code, I saw that someone (Dave?) said the simplicity of #ifdefs is strong, but that's true only for a developer. Right now, saying that channel 1 has 10 2812s followed by 20 SK6812s with white channels requires not only a recompile, but significant code changes. We need to build either a bunch of host configuration utilities (that means at least five OSes these days...) or some kind of config file parser and a way to feed it to the device.

6) Much of our upstream supports things like effects on LCD/OLED displays. Is that a needed goal for us? An ST7735 or ILI9341 is kind of "just" a more dense, less sucky HUB75, right? We may already have a screen present for configuration and debug. (See #5.) There's some reasonable argument for running custom effects libraries on an LCD in your hand - especially for effect development, or for validating that your Clark Griswold Christmas Effects play well on a screen in your hand before you get on the ladder. Some of the libraries used (or at least siblings to them) already have backends for this, but I haven't seen anything punching it through in NightDriver.

7) We really need a list of how definitions are used in this project. For example, this <https://www.aliexpress.us/item/3256803715519232.html> is a colored matrix. This <https://www.aliexpress.us/item/3256801772174048.html> is a colored matrix. Electrically and at the protocol level, those products have nothing in common. We can (and maybe should) let effects work on either one and hide the differences. But the pixel size is absurdly different and the scaling rules are different, so there's a good argument for just giving them different names and keeping them separated. The current convention is really confusing. For example, as I understand it, "Mesmerizer" seems to be both a physical board and a collection of effects. Thus, Mesmerizer can run code that's not for Mesmerizer, and Mesmerizer can run on controllers other than Mesmerizer. A Mesmerizer seems to be built to work with a matrix very well, but I've been unable to get questions about running a matrix on it answered. (There's something about a woodchuck, too...)
Unifying that documentation and text in the code (even if not the code itself) is a good opportunity for a non-programmer to help. Any takers?
8) Encompassing some of the above, we should probably have a comparison to 4-6 of the most popular GitHub (or otherwise Free Software) projects in this approximate space, explaining why a user should pick one over the other, what each does better, and what features are most likely to be inspirational or aspirational for future development. Do we strive to be the lowest computing power per megapixel (e.g. 12 channels of 600 NeoPixels per strand + 60fps on 8 chained 64x64 HUB75s on a $20 controller), have the most featureful effects libraries, be the easiest for a non-programmer to configure and build effects for (lol, no), have the easiest out-of-box experience, or whatever? That would help potential users decide if we're right for them, as well as provide a table for devs of things we think are important and things that just aren't strategic.

There's enough conversation starter for now. Maybe there's a clear vision for all this somewhere and maybe there's just not - and that's OK for a project this size, age, and complexity. Maybe I just need to better understand it, but I'll bet I'm not the only one confused about a lot of these external dependencies and interactions. Can we please kick around some guidelines to get us all pulling in the same direction?

I don't want to do mission statements or such, but I could use some serious help understanding what we're building so/if I can be a part of it. I know that some of the above sounds like it's wandering into the weeds (or being dragged - I know I do that.[1]), but it has very tangible impacts in the code if we can make some hard decisions on actual product focus. If FastLED's addLeds<> is a compile-time thing because it saves 7 bytes that were needed on ATmega and we don't care about ATmega, we can fix - or replace - that code if it ultimately lets us add and remove channels, strands, and bulbs from a web interface. (See, a tangible example from this very weekend....) So here I'm just tossing out some questions about what we already have, some adjacent problems and opportunities that might become growth opportunities for us, and completely new places we can go or choose to ignore. I hope we can get a good conversation going between the people that got us this far and the new generation that's trying to help carry the water.

Thanx for listening. I know I can be a lot.

[1] Similarly, I know I ramble. Feel free to ask for clarifications, but please resist the urge to remind me of the situations that make that true.
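Appendix sketch 1, for question 3: the "SPI lookup approach" expands each WS2812 data bit into SPI bits whose on-wire timing matches the strip's pulse spec, so any MCU with a fast SPI peripheral (ideally with DMA) can drive strips without per-bit CPU work. This is a minimal sketch of one common encoding (3 SPI bits per LED bit at 2.4 MHz); table layout and names are illustrative, not anything in NightDriver today:

```cpp
#include <stdint.h>
#include <stddef.h>

// With SPI clocked at 2.4 MHz, each WS2812 bit becomes 3 SPI bits:
//   0 -> 100 (~0.42 us high), 1 -> 110 (~0.83 us high)
// so one color byte expands to 3 SPI bytes via a 256-entry table.
static uint8_t g_lut[256][3];

static void buildLut(void) {
    for (int b = 0; b < 256; b++) {
        uint32_t bits = 0;
        for (int i = 7; i >= 0; i--)               // MSB first, as the strip expects
            bits = (bits << 3) | (((b >> i) & 1) ? 0b110 : 0b100);
        g_lut[b][0] = (uint8_t)(bits >> 16);
        g_lut[b][1] = (uint8_t)(bits >> 8);
        g_lut[b][2] = (uint8_t)(bits);
    }
}

// Expand GRB pixel bytes into a buffer handed to the SPI/DMA engine.
static void expandPixels(const uint8_t* grb, size_t n, uint8_t* spi) {
    for (size_t i = 0; i < n; i++) {
        spi[3 * i + 0] = g_lut[grb[i]][0];
        spi[3 * i + 1] = g_lut[grb[i]][1];
        spi[3 * i + 2] = g_lut[grb[i]][2];
    }
}
```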
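Appendix sketch 2, for 3a): a minimal sketch of what an automated on-device performance floor could look like. The effect hook, frame count, and threshold are all placeholders - the point is only that rendering into an offscreen buffer needs no LEDs on the bus, and a crash is enough to mark an automated run as a regression:

```cpp
#include <Arduino.h>

constexpr int   kFrames = 500;
constexpr float kMinFPS = 60.0f;     // the committed threshold under test

extern void renderEffectFrame();     // stand-in for the effect under test

// Render N frames and fail the run if we can't sustain the target rate.
void runPerfGate() {
    uint32_t start = millis();
    for (int i = 0; i < kFrames; i++)
        renderEffectFrame();
    float fps = kFrames * 1000.0f / (millis() - start);
    Serial.printf("render: %.1f FPS (floor %.1f)\n", fps, kMinFPS);
    if (fps < kMinFPS)
        abort();                     // non-zero exit flags the regression
}
```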
-
In summary, there are a number of projects that are actively deployed, such as:
LEDSTRIP - Used to drive LED strips over WiFi as controlled by NightDriverServer
LEDSTRIP_FEATHER - Same but for the S3
ATOMLIGHT - Used for the Atomic Fire Lamp
LANTERN - Used for a flickering candle style lamp replacement
XMASTREES - A bank of 5 lit Christmas trees
TREE - Used for the little Bonsai tree in the videos
(Etc)
Then you have music projects that can run on WS2812B:
SPECTRUM - 48x16 spectrum analyzer
Then you have Mesmerizer itself
MESMERIZER - The 64x32 HUB75 project
For now, we’re 100% dependent on and bought into the ESP32, so we pick up most of the hardware assumptions from it (dual cores, RAM/PSRAM split, etc)
Networking is optional per project, based on ENABLE_WIFI and WAIT_FOR_WIFI
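To make that concrete, here is a minimal sketch of how compile-time flags like these typically gate the network code - illustrative only; the real handling lives in the project's globals and network sources, and the credential names are placeholders:

```cpp
#include <WiFi.h>   // ESP32 Arduino core

#if ENABLE_WIFI
void setupNetwork(const char* ssid, const char* password) {
    WiFi.begin(ssid, password);
  #if WAIT_FOR_WIFI
    // Block boot until we're associated; projects that can run
    // standalone leave this off and retry in the background instead.
    while (WiFi.status() != WL_CONNECTED)
        delay(100);
  #endif
}
#endif
```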
Now, most every one of these is a hobbyist project, where they’re going to have to tweak the code and build it and flash it themselves. I’m fine with that approach.
For Mesmerizer, however, the goal is to ship a KIT that includes a matrix, board, and power wires. The buyer snaps it together, plugs it into the USB port, and visits a website to install the firmware if it isn’t already flashed at the source.
In other words, Mesmerizer should be turnkey and easy to use, while the others remain hobbyist projects - but they're equally important, at least to me!
Thanks
Dave
-
Dave beat me to it (darn day job 😄), which is actually good. He summarized a large part of my view on things excellently, which makes my answer a lot shorter than it would have otherwise been - note I'm not claiming it's short. Just to clarify up-front:

When I said "those are good questions" in the comment on PR #330, what I meant was that they're questions I also don't know the answer to. As I will explain, I'm not convinced all of them need to be answered.

By definition, any statements I make about picking up the gauntlet in this or that area, and creating code we can look at and consider, are in response to questions asked by @robertlipe <https://github.com/robertlipe>. However, they should explicitly be interpreted as general statements - not as stabs at the person who happens to bring the questions to the table.
Part of the questions asked seem to aim towards choosing one thing over the other. In line with what Dave said, I don't think things have to be mutually exclusive.

For me, one of the charming things about this project is that it provides a software solution that allows many different (ESP32) chips to control many different LED devices, running a pretty wide range of effects. To a point, one could consider NightDriverStrip a framework for ESP32-driven LED effects, with a pretty decent set of supportive services: device and effect configuration, an API-powered web UI, a Web Serial-based installer, etc. (In fact, I can't help being amazed at what "we in the broadest sense" manage to squeeze out of devices as small as the ones the project targets. But I digress.)

The way this is currently organized - a set of platformio.ini environment definitions combined with heaps and heaps of defines in globals.h and an effect list in effects.cpp - can most certainly be improved. Anybody who wants to take up that challenge is cordially invited to do so, as far as I'm concerned.
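For readers who haven't opened the repo yet, a hypothetical sketch of the shape being described - the environment name and flags here are illustrative, not the project's actual ones:

```ini
; Hypothetical shape of a per-project environment definition; the real
; ones (and their many -D flags) live in the repo's platformio.ini.
[env:demo_strip]
platform = espressif32
board = esp32dev
framework = arduino
; Each -D define selects a project block and its features in globals.h
build_flags =
    -DDEMOBOARD=1
    -DENABLE_WIFI=1
    -DMATRIX_WIDTH=144
    -DMATRIX_HEIGHT=1
```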
My comments specific to the numbered questions are as follows:

1) I don't think we know current user ratios, nor are we currently trying to get insights on that. The API endpoints that have been created up to this point were created for two reasons, more or less in this chronological order: first, to allow NightDriverServer, one of this project's sister projects, to do its thing; second, to facilitate an on-chip web UI that is currently under development. In the context of that second purpose, the API endpoints exist ahead of the UI that uses them. That comes with the risk of them not being (fully) used in the end, but it greatly simplifies the UI development if they are. With that, I'd say the primary aim is now to create a feature-rich single-page web UI, backed by a "developer-friendly" API. At the same time, if someone wants to enrich the API to support another purpose in a way that fits within applicable hardware limitations, I wouldn't reject this up-front either.

2) I don't think we know these ratios either, nor do I think we have to, for reasons indicated.

3) We currently target what we currently target. Each of the PlatformIO configurations that are defined aims to create a configuration that makes a certain set of effects with a certain set of supportive/management features run on a specific chip with a specific type of display hardware. In some cases (Mesmerizer, Spectrum) the point is to show a set of different effects on "generic" LED hardware. In other cases (Atomlight, Umbrella, Fanset) the configuration is created to run in a very specific physical context. As far as I'm concerned, anybody who wants to extend the set of configurations with something distinctly different (again, in terms of a chip/device/features/effects combo) is welcome to do so.

3a) I think Dave can give some input on what should be prioritized. I'd be surprised if frame/refresh rate is not pretty much at the top of the list. :)

4) I'd say no - it currently isn't in the practical sense, either.

5) I'd say no, and I also don't think that's realistic, from three angles:

There's quite a lot that currently has to be decided at compile time (template parameters, for one...). Some of these we may be able to navigate around; others I don't think we can. (See the sketch after this list for the usual workaround on the pin-template point.)

We'll run out of on-chip resources well before we've made everything configurable that could be.

Many things are so low-level, and require such detailed knowledge of the hardware, that someone wanting to change them will be operating at the source code level almost by definition. In fact, I'd expect them to think a "well-intended" UI for changing them is a thing in the way of getting things done efficiently - but maybe I'm projecting.

6) I'd say we don't need to make this a needed goal, but if someone submits the code and PlatformIO configuration to create a setup that uses them and does something beautiful, I'd be happy to merge it.

7) I don't experience the confusion expressed - certainly not to that level - but I may have been blinded by "the evil I know". If someone is willing to clarify definitions in documentation AND code, then I'd urge them to please go ahead.

8) I'm not sure if I understand what this would add. I think the current documentation, the list of "task" issues and even the code give a good impression of where we are and what the current thoughts are about where we're going. But again, and by now predictably, anybody who is willing to take the time to draft a comparison/explanation like that is welcome to do so.
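On the compile-time point in 5): FastLED's data pin really is a template parameter, so it can't be chosen at runtime directly. A minimal sketch of the usual workaround - pre-instantiating the handful of pins a board actually exposes and switching among them at runtime; the pin numbers and buffer size are made up, not NightDriver's:

```cpp
#include <FastLED.h>

CRGB g_leds[300];   // illustrative buffer size

// The pin must be a compile-time constant, so stamp out one addLeds
// instantiation per pin the board exposes and pick among them at runtime.
void addStripOnPin(int pin, int count) {
    switch (pin) {
        case 5:  FastLED.addLeds<WS2812B, 5,  GRB>(g_leds, count); break;
        case 16: FastLED.addLeds<WS2812B, 16, GRB>(g_leds, count); break;
        case 17: FastLED.addLeds<WS2812B, 17, GRB>(g_leds, count); break;
        default: break;  // pin not wired on this board
    }
}
```

Note the trade-off: every case stamps out another controller instance, which is exactly the code-size cost the compile-time design was avoiding on small parts.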
As a general statement, I'd say that if there is a particular scenario anybody would like to focus their efforts on, then let's just discuss that in concrete terms as a possible addition to the project. If I'm brutally honest - and this is a very personal statement - I'm actually tempted to just say "let's shut up and create the code" (or documentation, as the case may be). In my experience, having code to look at is a great way to boil down floating conversations to their core. If we don't like what we're looking at, we can always decide not to merge it. :)
-
Just to the priority point: Top priority is to get the code ready for the Mesmerizer 1.0 release while not regressing any already-working projects.
I think we’re in good shape, but the UI will likely be the gating factor. I need to spend some more time on some kind of a spec for it, but I hope I’m not holding it up too much!
My plan, once the boards are available in quantity, is to first make sure all the contributors have one and then put the rest on Amazon. I’m thinking I’ll offer them bare and as a kit that includes a matrix and power cable. The goal is to get them into as many hands as possible, and of course I’ll do a video to support it once they’re available.
The major gating factor on that point is two parts, a voltage regulator and a simple 4-pin header block. I may have to enlist some help in finding exact replacements, as I’m loath to make 200 boards of something I’ve never tested!
- Dave
-
First off, I wanted to welcome all the new folks who have joined the project in the last month or so since the video came out. About 200,000 people have seen it now! Glad to have you aboard.
https://youtu.be/X3V4gxd20FM
Second, I wanted to thank everyone who's been actively contributing to the project of late. I realize it might take some time to get used to exactly how we do things, and the sometimes historical choices that we've stuck with, and I appreciate your patience!
One thing I loved about Microsoft in the olden days was that most everyone was so smart that arguments got resolved based on who was right, or whose idea was better. And that's usually the case here, too.
If you've noticed that Rutger seems to run things around here, that's largely because he does. He's been running everything source-code related for the channel for years now, and I couldn't do it without him. Which I know for sure, because I didn't do it without him before he signed on! I'm a one-man channel on the video side, so that's enough work in itself - everything Rutger does is a bonus. We're all indebted to his efforts, especially me!
Remember, this project started as blinking one LED and has been "revised" continually for about 5 years. Which is to say, it wasn't designed as a whole, it evolved. Some of the code is what you'd throw together on a weekend to make something work... because that's what I was doing at the time. Some is much better thought out, but there are always vestiges of that initial hackery. Feel free to fix 'em!
The code has reached a real level of maturity in the last few months, so even if it's not how I'd write version 3, I'm pretty happy with it for a v1 product!
One philosophy we've adopted is that there are three kinds of errors: things you can safely ignore, things you can recover from, and things you reboot for. For example, WiFi not connecting is non-fatal and can be retried, whereas drawing to a pixel outside the matrix bounds should trigger a fatal exception. Sticking to this hardcore approach means Nightdriver is pretty solid.
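As a sketch of that three-tier policy (function names, bounds, and the framebuffer here are illustrative, not NightDriver's actual helpers):

```cpp
#include <Arduino.h>
#include <WiFi.h>

constexpr int kWidth = 64, kHeight = 32;      // illustrative matrix size
uint32_t g_framebuffer[kWidth * kHeight];

// Recoverable: losing WiFi is routine, so retry forever and never
// take the renderer down with it.
void ensureWiFi() {
    while (WiFi.status() != WL_CONNECTED) {
        WiFi.reconnect();
        delay(5000);
    }
}

// Fatal: writing outside the framebuffer is memory corruption in the
// making, so die loudly instead of limping along.
void setPixel(int x, int y, uint32_t color) {
    if (x < 0 || x >= kWidth || y < 0 || y >= kHeight)
        abort();
    g_framebuffer[y * kWidth + x] = color;
}
```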
Keep an eye on the Issues section as I'm going to try to open new work items that will give folks opportunities to add some new effects and features!
Cheers and thanks,
Dave