1 Research
1.1 Incentivized littering experience - Examples from the lectures
1.2 Incentivized littering experience: Project BB as own example
1.3 Playful sports habits: Project Vergardening as own example
1.4 Playful interaction as a business
1.6 Is it successful then?
1.7 My personal connection to gambling
1.8 Raffles in year 2020 🦠
2 Tinkering
2.1 Coin detection
2.2 Visual Reward - Choice of light
2.3 Choice of interaction - LDR Sensors for age-detection
2.4 Choice of interaction: We need a trigger
2.5 Audiovisual Reward - Sound Design
2.6 Analog-visual feedback
3 Design & Build
3.1 🛠 Ingredients
3.2 Tricks used in the code
3.3 Showcase video
3.4 GitHub Repo
4 Reflection
6 Next steps
During the lectures it was interesting to follow the given examples - especially the two examples Volkswagen came up with regarding littering.
The Fun Theory - an initiative of Volkswagen: the Bottle Bank Arcade Machine. This is one of a series of experiments for a new VW brand campaign, showing that fun can obviously change behaviour.
During the Blockchain Minor last year I got the opportunity to work on a field assignment in cooperation with the team behind Project BB - an AI-based beach-cleaning robot.
My part in this project was to look deeper into the gamification aspect and turn the robot itself into some kind of ecosystem - ideally using tamper-proof blockchain technology - with the vision of turning the robot(s) into autonomous beach-cleaning instances that run not on government funding, but thanks to people interacting with them, both by feeding them trash and by playing a trash game to detect new patterns of beach litter.
Yet another example I wanted to show deals with the Verademing Park in The Hague. In this project we dealt with the question of how to create a meaningful connection between people in a park with otherwise relatively isolated communities.
In our concept, we follow a clearly incentive-driven and gamified approach, as you can see in the following video:
While the given examples show full ecosystems with playful interaction at certain touchpoints rather than actual playful machines, I decided to take my focus for this challenge in a different direction and look at probably the most addictive machines in the industry.
I was always inspired by slot machines in some way. While I always managed to keep a professional distance regarding addiction, what fascinated me was how sounds and visuals can be used for stimulation and active persuasion, and how an interaction that follows a simple, repetitive logic can still invite users to hours of engagement and fun.
In general I found that these slot machines are not necessarily about the game itself, but about the playful interaction made possible by sounds, big buttons and other ingredients. Let's find out what they are in the next section.
Regardless of which direction in playful interaction I end up choosing, I think this is an interesting starting point: playful interaction should not be about designing a game, but about nudging people in a playful way.
In the following I want to summarise my findings from the desk research.
In the report, slot machines are described as being close relatives of casual games, "games that generally involve less complicated game controls and overall complexity in terms of gameplay or investment required to get through a game".
- Minimal time requirement to learn the controls (Intuitive)
- Time commitment can reach from seconds to hours (Scalable)
- Instant reward feedback (either financial, through points or audio / visual rewards) for "winning" moves (e.g. coin-dropping sounds, uplifting beeps suggesting progress)
- Verbal reinforcement from virtual characters that may speak to the player
Video: a close-up of an ATmega alien-themed slot machine build - see https://www.5volts.org/home/atmega-alien-themed-slot-machine for all of the detail.
Something you cannot misinterpret is the smile of these guys in suits. Bankers? No - the Gauselmann family business' revenue basically relies on UX. Their business is the manufacturing and development of digital and physical gambling experiences.
But would you rather be one of their average customers, playing their games while gradually going bankrupt and developing an addiction to the machines? Or celebrate the joy on the other side of the machine like the handsome guys in their suits? Fun fact: I actually have a lot in common with those guys.
Back in the day, at our local church community's summer event / Christmas market I was part of the raffle team. We collected prizes donated by companies and then sold lottery tickets for 0.50 EUR. Every ticket was a winning ticket - so rather ethical gambling here. The variable reward component is present for sure, yet in a very analog way.
The summer event and Christmas market will take place again this year, and although I have not been involved for years, I was asked for advice on how to proceed this year. To explain how it works in general, this is the basic journey:
- 🙋🏽♀️ Attracting customers in the first place
- 💰Collecting coins / cash from strangers
- 🧒🏽 How old are you? Child / Teen / Adult?
- 🎟 Letting them pull a random number from a pot full of lottery tickets
- 🎁 Handing over present
This year however, due to the COVIDstances, the events will take place in a decentralised way: instead of at the district's central market spot, they will be spread across the entire district.
The organisers see the biggest challenges in steps 1-3 and asked me to come up with something fun and engaging that fits the given circumstances.
The exchange of real coins under the current circumstances is not conceivable. I therefore thought about how to insert, check and accept a coin.
The first direction I considered was the hall sensor integrated in the ESP32 - detecting the difference between different coins / metals. This option however turned out to be infeasible, as hall sensors only detect magnetic fields and coins are usually not magnetic.
Forum thread: "coin detection with hall sensors or others?!" - "Hey guys, I'm trying to work on a project that would need to be able to detect a coin. I don't have a magnet available to try, but I'm certain that most coins are not magnetic. What I was looking at doing was using a hall sensor and magnet..."
This thread however gave me good input:
- Slot/vending machines even use magnetic sensors to filter out fake coins
- Inductive proximity sensors can be used to detect and distinguish metal instead
- 💡Most slot & vending machines use weight sensors
Having the weight sensor (HX711) as safe backup in place, I carried on with my thinking process and decided that before over-engineering a minor detail in this experience, for now it might be more interesting to know whether someone dropped a coin at all and figure out what to do when this event triggers.
I found that the coin event can easily be captured with a sound sensor as well - at least that's what I assumed in the beginning, but my microphone sensor challenged me quite a bit.
It took a whole bunch of testing and debugging, including the removal of the microphone's front cover. As you can see, the microphone outputs a constant value of ~63 and only shows an amplitude when you hit it very hard; minor variances were not picked up.
As you can see in the example on the right side however, the value should vary quite a bit, so I assumed my sensor was malfunctioning.
For my use case I found a threshold value just over 66.
After further reading I figured that the sensor was in fact not malfunctioning, but that I had missed adjusting the sensitivity potentiometer - the golden screw on the sensor module.
One turn with the screwdriver later, the microphone recognised every coin insertion. Happy days! 💸
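To make the detection logic concrete, here is a minimal sketch of the idea in plain C++. On the Arduino, `micValue` would come from `analogRead()`; the ~63 baseline and the threshold of 66 are the values from my setup:

```cpp
// Minimal sketch of the coin-drop detection, assuming the microphone
// module rests at a baseline of ~63 and spikes when a coin hits the slot.
// On the Arduino, micValue would be read with analogRead(micPin).
const int MIC_THRESHOLD = 66;  // just over the resting baseline of ~63

bool coinDropped(int micValue) {
    // Any reading clearly above the baseline counts as a coin hit.
    return micValue > MIC_THRESHOLD;
}
```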
Did you recognise the melody? With my "client" in mind, I created a small "AMEN"-melody.
Client feedback: "It sounds a bit sad, maybe a "Haaaa - lle- luj - ah" would be better"
After implementation: Yes, it is:
After more testing, I suddenly got constant values over 1000, which led to constant beeping. Eventually I noticed the mic sensor suffered from a loose connection, so I decided to solder it to a longer cable, which might work better in the final product anyway.
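For the chimes, the buzzer is driven with tone() and a list of note frequencies. As a hedged illustration (the actual notes of the melody are not reproduced here), any frequency can be derived from A4 = 440 Hz in equal temperament:

```cpp
// Illustrative helper for building buzzer melodies: computes the
// frequency of a note that sits n semitones away from A4 (440 Hz).
// On the Arduino, the result would feed tone(buzzerPin, freq, durationMs).
#include <cmath>

double noteFreq(int semitonesFromA4) {
    return 440.0 * std::pow(2.0, semitonesFromA4 / 12.0);
}
```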
So far in the workshops we had only worked with one matrix. The module I have here, however, consists of four LED matrix modules in a row, and with the basic library it was not possible to control them separately.
After playing around with the MD_MAX72xx library, I felt a bit overwhelmed by the amount of functions. Following the mantra to "use examples only once you fully understand them", I continued my research - especially since my LED matrix was not yet scrolling the text properly (inverted direction, inverted letters, with delay).
Watching some videos led me to the more logically organised MD_Parola library. The issues with my particular panel persisted, however, so I dug deeper and eventually found that the solution had nothing to do with adjusting the frame delay or modifying variables that sound like they might influence the direction:
I had to change the following parts in my code. Sometimes minor details come with major impact.
#define HARDWARE_TYPE MD_MAX72XX::
Parola is a modular scrolling text display library using MAX7219 or MAX7221 LED matrix display controllers with Arduino. The display is made up of any number of identical modules that are plugged together to create a wider/longer display.
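For reference, a hedged example of what this change typically looks like. FC16_HW is the constant that usually fixes inverted/mirrored scrolling on the common 4-in-1 8x32 panels, but which value is right depends on how your particular modules are wired:

```cpp
// Assumption: a generic 4-in-1 8x32 panel; the correct constant for your
// hardware may be PAROLA_HW, GENERIC_HW, ICSTATION_HW or FC16_HW.
#define HARDWARE_TYPE MD_MAX72XX::FC16_HW
#define MAX_DEVICES   4   // four 8x8 modules chained in a row

// The type is then passed to the Parola constructor:
// MD_Parola display = MD_Parola(HARDWARE_TYPE, CS_PIN, MAX_DEVICES);
```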
Our raffle is not trivial! We have prizes for 🧒children, 👩🎓teenagers and 🧔grown-ups. The prizes have different number ranges.
Of course, we might solve this by using a button for each of our "target audiences", but hey - it's corona time 🦠 and a button touched by 1000s of people is probably not the best approach.
So what if you could approach the sensor in a more natural way?
The sensor I had in mind at first was the HC-SR04 ultrasonic sensor. People could stand underneath it and, using size detection, it would determine our "target audience". This however would discriminate against little people and overvalue some teenagers, who might end up going home with inadequate presents 😈.
After some experimenting I found that we could achieve a similar result using the standard LDR sensors. Hovering over an area is contact-free, so we can paint each area with an icon for children / teens / grown-ups and detect a hover action. However, I found that under changing light conditions a fixed sensor threshold value is not optimal. At the same time there was variance between the sensors, although they use the same resistors. The min() function works perfectly here: in the setup function it determines the lowest value of all sensors, and based on that a percentage-based threshold is set just below the non-hover light value.
Thus the detection works just as well at a bright summer party as in a dim Christmas fair setting. Using the 🔘Reset button, the light sensors can now be recalibrated.
childValue = analogRead(prChildPin);
teenValue = analogRead(prTeenPin);
adultValue = analogRead(prAdultPin);
int minTreshold =
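The calibration idea behind this can be sketched in plain C++ as follows (the sensor reads are stubbed as parameters; the 90% factor is an assumption, not my exact tuning value):

```cpp
// Sketch of the percentage-based threshold calibration. On the Arduino,
// the three inputs come from analogRead() on the LDR pins; here they are
// plain parameters so the math stands on its own.
#include <algorithm>

int computeThreshold(int childValue, int teenValue, int adultValue) {
    // Take the lowest non-hover reading of the three sensors...
    int lowest = std::min({childValue, teenValue, adultValue});
    // ...and set the threshold just below it (90% is an assumed factor).
    return lowest * 90 / 100;
}

bool isHovered(int reading, int threshold) {
    // Covering an LDR with a hand darkens it, so a hover shows up
    // as a reading below the calibrated threshold.
    return reading < threshold;
}
```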
"Parola A to Z - Mixing Text and Graphics": the key function of the Parola library is to display text using different animations. From version 2.7 onwards, Parola allows user code to manage mixing graphics with the text.
When comparing digital and classical slot machines, there is a minor detail I consider interesting.
The trigger in today's slot machines is just a button, while back then most machines used a proper physical lever on the right.
My hypothesis: with a real lever, users are more in motion and potentially have more time to reflect on what they are doing, while a simple button press is less likely to interrupt the flow of gambling and spending money.
For the "ethical" way of gambling I am pursuing here, I think the classical way is more fitting.
But what would it take to make the whole thing more engaging 🎉 and covid-safe 🦠?
When thinking about church in general, to me it often carried something mysterious, "magic", maybe even intangible - both mentally and physically.
Thus, the idea I came up with was to use a "virtual magic wand" as the trigger for the machine. This would give everyone the possibility to try their luck with "individual" sway gestures, just by using their hand. This increases the perceived autonomy and improves the overall memorability, because what you pay for is "one shot" at one win.
I decided to use the ultrasonic sensor for that, as it works very precisely and in a higher range than the light sensors used for the age detection.
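A hedged sketch of how the wand trigger could work with the HC-SR04. On the Arduino the echo time would come from pulseIn() on the echo pin; the 40 cm trigger range is an assumption, not my tuned value:

```cpp
// Converts an HC-SR04 echo time (microseconds) into a distance in cm.
// Sound travels at ~343 m/s and the echo covers the distance twice.
long distanceCm(long echoMicros) {
    return echoMicros * 343 / 2 / 10000;
}

// A "sway" counts when a hand appears within an assumed 40 cm range.
bool wandDetected(long echoMicros) {
    long cm = distanceCm(echoMicros);
    return cm > 0 && cm < 40;
}
```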
I have used the piezo buzzer a few times now and, as stated above, included a few chimes that fit the experience. However, I thought that for enhanced playful interaction it might be cool to leverage a real speaker for that purpose.
However as I am always aiming for "embedded" solutions and did not want to go the Processing path just yet (which would be easy to realise with the Processing MP3 library), I did some further research.
🔈How about connecting a speaker to an Arduino directly? As we found earlier, this would require an amplifier chip (besides keeping sound files small, as the Arduino has limited storage)
🍓How about the Raspberry Pi? Could we connect the speaker to a pin?
🍓Could we grab a signal from the serial port and trigger to play a file?
As it turned out, this would be possible but probably more work for less result, while Processing offers more possibilities, such as visual output as well.
Bottom line, I concluded that for prototyping purposes, Processing might still be the best way to go for now.
Piezo buzzer for the start
For testing purposes and as a safe backup with minimal hardware, I decided to still use the piezo buzzer after all. It would also be possible to make Processing send a message to the serial interface, so the Arduino knows to disable the buzzer when Processing mode is active.
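A minimal sketch of how that handshake could look (the command bytes 'P' and 'B' are my own assumption, not an existing protocol):

```cpp
// On the Arduino, handleSerialByte() would be fed from Serial.read()
// inside loop(). Processing sends 'P' when it takes over audio playback
// and 'B' to fall back to the piezo buzzer.
bool buzzerEnabled = true;

void handleSerialByte(char c) {
    if (c == 'P') buzzerEnabled = false;  // Processing plays rich audio
    if (c == 'B') buzzerEnabled = true;   // piezo buzzer backup mode
}
```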
Certainly, this servo motor hyped me a lot
Last week, after the workshop I decided to tinker further with the servo motor and built a small pill dispenser for my daily vitamins.
Curious about what else might be possible, I suddenly had a flashback to a nearby mattress store that had a cardboard standup person in the shop display. The special thing about this particular cardboard standup: the person's arm was rotating 360 degrees at a constant speed. The fact that this movement looked so anatomically unnatural is why it stayed in my memory.
Small reminder: I am basically designing for the church. So what if the machine had a Jesus with moving arms?
Brilliant ! 🙌
- Raspberry Pi for remote development
- Arduino R3
- Microphone sensor (Coin detection)
- 3x Light sensors ( For contactless selection of age)
- Ultrasonic sensor (virtual magic wand)
- Servomotor (Jesus' hand)
- 8x32 LED matrix (MD_MAX72xx & MD_Parola library)
- Plenty of cables
- A free banana box from ALDI
By using a global variable for "journey steps", it was easily possible to test the different steps in the experience separately from each other.
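The trick can be sketched like this (the step names are my paraphrase of the journey described earlier, not the identifiers from my actual code):

```cpp
// A global "journey step" variable lets each loop() pass run only the
// handler for the current step; presetting it allows testing any step
// in isolation.
enum JourneyStep { ATTRACT, COIN, AGE_SELECT, WAND, PRIZE };

JourneyStep nextStep(JourneyStep s) {
    switch (s) {
        case ATTRACT:    return COIN;        // coin heard by the microphone
        case COIN:       return AGE_SELECT;  // hand hovers one of the LDR icons
        case AGE_SELECT: return WAND;        // waiting for the magic-wand sway
        case WAND:       return PRIZE;       // show lot number, wave Jesus' arm
        default:         return ATTRACT;     // back to attracting visitors
    }
}
```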
What seemed easy to build in the beginning turned out to be quite demanding, as every new component I used came with different hurdles, such as:
- the light sensors interfering with the LED matrix due to their placement on top (I solved it by adjusting the angle of both components and including an adaptive lighting function that regularly updates the threshold - this was also needed because one of my light sensors always showed values that were too low, despite identical resistors and overall lighting conditions)
- the microphone sensor not giving any significant values at first, because there was a tiny potentiometer I didn't notice. The second time, however, I faced a desoldered contact. Diagnosing such issues often takes a whole lot of time.
- the LED matrix being more complex for my application than expected, as the corresponding library requires fiddling around with char arrays, and the conversion and injection of int values into char arrays didn't work at first, which is why I wasted a lot of time on half-baked (but working) solutions
However, I can clearly say that all these problems were huge opportunities to work on my coding skills, read and grasp Stack Overflow, and refactor pieces of the code whenever a tipping point occurred. All these bugs in between helped me understand how important a debugging function is and how to control it - not by using the delay() function, but in an asynchronous, non-blocking way using millis(). Having all the relevant information at hand, updated every second, was very handy for proceeding faster with the development process.
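That non-blocking debug output follows the classic millis() pattern; a sketch in plain C++, where the simulated timestamps stand in for millis() and the 1000 ms interval matches the once-per-second updates mentioned above:

```cpp
// Instead of delay(1000), compare timestamps so the main loop keeps
// running while debug output is only emitted once per second.
unsigned long lastDebug = 0;
const unsigned long DEBUG_INTERVAL = 1000;  // ms between debug prints

bool debugDue(unsigned long now) {
    // Overflow-safe subtraction, the same pattern used with millis().
    if (now - lastDebug >= DEBUG_INTERVAL) {
        lastDebug = now;
        return true;  // time to print the sensor values
    }
    return false;
}
```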
After all, I would of course have wanted more time to bring the whole thing to the next level. For me, the next level would be to connect the already integrated Raspberry Pi to an HDMI screen and produce the output in an automated, embedded way, e.g. via Processing, so that user errors can safely be excluded. This next step is indeed planned, as the person running the raffle at the event might want to see what happens on the machine: the current number, resetting the state when errors occur, supporting the machine with rich audio sounds (because that's indeed possible with Processing) - you name it.
But this would take me another week for sure, and I guess the project so far is good enough to showcase a validated, standalone working MVP and at the same time give a clear direction for the next steps.
- Processing interface displaying what happens on the machine
- Current lot number
- Reset the state when errors occur
- Supporting the machine with rich audio sounds using Processing MP3 library
- Optimize housing
- Color prints
- More solid materials
- Embed LED matrix (no interference with light sensors)