Or, as our parents’ generation might’ve said, “TANSTAAFL” (There Ain’t No Such Thing As A Free Lunch).
A few years ago, I switched my hosting for this site from Amazon AWS to a service called Netlify. I didn’t mind the few dollars per month it took to host the site on Amazon, but getting my updates posted required a number of manual steps that, if done incorrectly, would render the site unreachable.
After screwing up a few times, I realized I needed to spend some time researching and implementing a modern, automated workflow. That’s when someone on the Jekyll forums mentioned Netlify. Netlify wraps up all the automation in a smooth package for the low, low price of nothing for non-profits and vanity projects.
Netlify makes its money by getting nerds like me to use the service for our personal projects and then hoping we’ll recommend it to our bosses when we need such a service at work. No harm in that; companies do it all the time with students and educators.
I was happy using Netlify and I really have nothing bad to say about the service as I used it.
This week, a rather disturbing post came across Reddit. A free-tier user with a hobby site, like mine, was sent an invoice for $104,000 because of a traffic spike that sent hundreds of terabytes of requests to his site. The user, who is a web developer, said the requests were all for a copy of a 20-year-old song by a Cantonese singer. Hmmm.
The developer contacted the company about the invoice and they offered to reduce the charge to $21,000; after some pushback, they offered “only” $5,000. There is no argument that there was a traffic spike, although there is some speculation about why it happened when it did. The CEO came onto Hacker News and said they’ve cancelled the invoice, and that they do that occasionally when this specific problem arises. So, they’ve known this can happen, but did nothing to prevent it.
The problem is that there are currently no controls on Netlify to prevent these kinds of traffic spikes, which can disable a website and lead to crazy invoices. Amazon AWS, on the other hand, has fine-grained controls over such activity. The account I used when I previously had this site on AWS was configured to send me an email if my monthly bill ran over $30 and to stop all activity if it hit $40.
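For anyone who wants the same guardrail, a CloudWatch alarm on the EstimatedCharges billing metric is one way to get it. Here’s a minimal sketch in Python; the alarm name and SNS topic ARN are placeholders, and actually creating the alarm requires boto3 and AWS credentials.

```python
# Sketch of an AWS billing guardrail: alarm when the estimated monthly
# bill crosses a threshold. The topic ARN shown below is a placeholder.
def billing_alarm_params(name, threshold_usd, topic_arn):
    """Build the CloudWatch parameters for an EstimatedCharges alarm."""
    return dict(
        AlarmName=name,
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,  # the billing metric only updates every ~6 hours
        EvaluationPeriods=1,
        Threshold=threshold_usd,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],
    )

# To actually create it (billing metrics live in us-east-1):
#   import boto3
#   cw = boto3.client("cloudwatch", region_name="us-east-1")
#   cw.put_metric_alarm(**billing_alarm_params(
#       "monthly-bill-over-30", 30.0,
#       "arn:aws:sns:us-east-1:123456789012:billing-alerts"))
```

Stopping all activity at a second, higher threshold takes more plumbing (an alarm action that disables the serving side), but the alert half really is this simple.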
That was the driver behind me moving back to Amazon AWS. Now, to spend a few hours at the keyboard.
Here I am on a Friday night, trying to remember all the steps to getting a static site running on AWS with SSL enabled. I’ve done this a dozen times, but it’s been a few years. When it was all done, I’d spent about 4 hours going through all these steps several times for several domains:
If you’re interested, here’s a good summary of the GitHub Actions setup. There was one change I had to make in AWS that is mentioned in this StackOverflow comment regarding ACLs. In addition to that change, there is a setting on S3 buckets to re-enable ACL access to the bucket. I don’t know why AWS has all these settings with similar purposes, but all I can do is write it down here for the next person to discover. Good luck.
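For the curious, the core of that deploy step is small. This is a hedged sketch, not the actual Action I run: it walks the Jekyll `_site/` output and pairs every file with an S3 key and a guessed Content-Type (the bucket name in the comment is a placeholder).

```python
# Minimal sketch of a static-site deploy: enumerate the built files
# with their S3 keys and MIME types, then upload each one.
import mimetypes
import os

def site_objects(build_dir):
    """Yield (local_path, s3_key, content_type) for every built file."""
    for root, _dirs, files in os.walk(build_dir):
        for name in files:
            path = os.path.join(root, name)
            key = os.path.relpath(path, build_dir).replace(os.sep, "/")
            ctype = mimetypes.guess_type(name)[0] or "application/octet-stream"
            yield path, key, ctype

# Uploading requires boto3 and credentials ("example-bucket" is a placeholder):
#   import boto3
#   s3 = boto3.client("s3")
#   for path, key, ctype in site_objects("_site"):
#       s3.upload_file(path, "example-bucket", key,
#                      ExtraArgs={"ContentType": ctype})
```

Setting the Content-Type explicitly matters: without it, S3 serves HTML as `application/octet-stream` and browsers download pages instead of rendering them.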
If you’re reading this, it means that I got everything moved over to AWS and configured correctly. I’ll go back to paying $4/month knowing that I won’t ever get a $104,000 invoice.
]]>Caution: There is no TL;DR on this one.
Our ISP has been running fiber in our neighborhood for about 6 months and they’re close to having things wired up. I have a pedestal in the alley behind the house and as an existing DSL (yes, you read that right) customer, I’m one of the first to get the upgrade. In anticipation, I’m doing some upgrades to my home network.
OK, first about the DSL. I had cable internet for about 18 years from the local cable TV company. In 2001, when we bought this house, the 20Mbps down/2Mbps up connection was pretty amazing. There were always issues, but we learned to deal with them. Over time, that speed went up to 50/5 and then 100/10. Over that same period, the company changed from “Weatherford Cable” to “ClassicNet”. Then from “ClassicNet” to “CEBridge”. A few years later, it changed from “CEBridge” to “Suddenlink” and that’s when things really started to go off the rails.
When Suddenlink took over, I had monthly downtimes lasting hours. I could count on an hour-long outage on Monday at 11AM as the local employees brought down the network for maintenance. At least once a quarter, I had an outage of more than a day. A couple of times, during ice and snow storms, the entire community was without internet for 4 or 5 days. This was beginning to have an impact on my ability to work. During those outages, I would tether my work laptop to my phone, but that’s really not a reliable or scalable solution.
In 2019, I started looking at my options. There are several fixed-wireless ISPs that operate in the area. I know the owner of one such provider and I asked him to see if I could switch to his service. Unfortunately, there was no place on my property where we could locate my antenna so it could see his antenna. There are quite a few trees in our neighborhood and these were blocking line-of-sight to his tower.
The local telephone company also provides internet services. I asked about their packages and they offered both fixed wireless and DSL. The DSL came over the twisted-pair copper lines and could provide up to 25Mbps down/5Mbps up. I know that seems insanely slow these days, but I talked with their network engineers and they said their DSL hadn’t gone down in over 3 years. I decided to roll the dice.
I kept my cable internet account for a few months while I set up the DSL and experimented with it. True to their word, I had no interruptions in service. In January 2020, I cancelled my cable internet.
Think about that. January 2020. What was happening about that time? People were saying stuff like, “Have you heard about that virus going on in China?” Well, the cable company did 2 things at that point:
They sold themselves, once again, and became Optimum
Optimum shut down the local office, so trouble calls were now routed through a central office in Arkansas. I learned from friends and family that still used them that sometimes it took 3 weeks to get a problem fixed. Meanwhile, all those new telecommuters didn’t have internet.
Talk about dodging a bullet.
My ISP has a combo modem/router box. They do all the management on the box and they do not allow customers to modify settings. They made one small concession to me, which was to bridge one of the ethernet ports on the modem so that I could attach my own network to theirs. I bought a set of Google Home mesh routers and placed them around the house. I don’t like the Google router, to be honest, but it sucked less than the Netgear Orbi mesh that I was replacing.
The problem with the Google router is that it can only be updated with a mobile app. That app hides some of the crucial settings in strange places and sometimes it would just refuse to connect. And, in typical Google fashion, there was no customer support.
What I wanted was a modern router similar to the old workhorse Cisco/Linksys WRT54G that I had years ago with a custom Tomato or DD-WRT firmware. I could tweak a lot of settings to my liking and it always worked. Here’s what I wanted from the new setup:
House network for personal devices with adblocking across entire network.
Separate network segment for IoT devices that can’t see the house network. I’m not comfortable with the amount of telemetry some of these shitboxes send home and I’d like to throttle it without killing my home network. It would run on 2.4GHz because a lot of IoT stuff doesn’t work on 5GHz.
Have a lab segment for work.
A guest network that can be accessed for up to 24 hours without me having to hand out a password, but isn’t so convenient that neighbors just leech off my network.
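An address plan for segments like these is easy to rough out. This sketch is purely illustrative (the ranges are not what my network actually uses); it just carves a private /22 into one /24 per segment with Python’s stdlib:

```python
# Illustrative address plan: one /24 per network segment, carved
# from a private /22. Segment names match the goals listed above.
import ipaddress

SEGMENTS = ["house", "iot", "lab", "guest"]

def address_plan(base="192.168.0.0/22"):
    """Map each segment name to its own /24 subnet."""
    nets = ipaddress.ip_network(base).subnets(new_prefix=24)
    return dict(zip(SEGMENTS, nets))

# address_plan() -> {"house": 192.168.0.0/24, "iot": 192.168.1.0/24,
#                    "lab": 192.168.2.0/24, "guest": 192.168.3.0/24}
```

Keeping each segment on its own subnet is what lets the firewall later say things like “IoT can reach the internet but not the house network.”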
As it happens, my director at my new employer is formerly of Cisco. He filled my head with ideas on how to achieve my goals. Naturally, it involved spending a bit of money. Toward this goal, I’ve purchased:
(1) 4-port mini-PC with 1TB storage and 32GB RAM. You can get these on Amazon or AliExpress (if you’re willing to wait for international shipping). I bought the barebones model and installed Corsair memory and NVMe modules.
(1) TP-Link 8-port managed gigabit switch that supports VLANs and Power-over-Ethernet (PoE).
(2) TP-Link EAP 610s with PoE for indoor use
(1) TP-Link EAP 610 (outdoor, sealed) with PoE
I didn’t want to disrupt internet access over Christmas while our kids were home, so I started working on this away from our regular internet infrastructure. My wife was on holiday last week and this week and will be back at work next week, while I’m still not back until January 8th. I wanted to make sure I had everything working in the garage before tearing out the existing connections. I told her that this is like trying to grow a tree from the limbs out to the leaves without having a trunk.
So, imagine me in the cold garage, sitting on a folding chair with a 3 foot (1 meter) piece of lumber for a worktable. I had a portable heater at my feet to hold off the chill as I fastened things together and configured software.
The day after Christmas, with the kids back at their own homes, I installed Proxmox VE on the mini-PC. Proxmox is a virtualization environment that will let me run a bunch of small services that would normally go on a Raspberry Pi. In the first VM, I installed OPNsense, which is an open-source firewall.
I had the wireless APs attached to the switch, which was in turn attached to the mini-PC on port 2. Port 1 (aka ETH0) will eventually plug into the ISP box, where it will get a dynamic IP address from their DHCP server.
I was having a lot of problems understanding how to configure things like virtual adapters in Proxmox, as well as bridging them in a way that traffic could move about. I watched hours of YouTube videos from alleged experts in this field. I’m not sure if they’re experts in virtual environments or just really good at getting their videos to the top of searches for virtualization. Regardless, I did a lot of experimenting using a couple of older PCs that worked well in the confines of the garage. I thought I had everything working right and was ready for the moment to plug it into the ISP box.
I asked my wife if she minded me turning off the internet for an hour or two and she said she didn’t mind, as she was reading a book I bought her for Christmas. So, off I went to disconnect the existing network and plug in my new network.
I’ll save you the suspense: nothing worked. Not partially, not intermittently. It just flat-out didn’t work.
I tweaked settings and nothing would send traffic out of my local network. After 2 hours of this, I was frustrated and I disconnected the new network. I made so many changes trying to get it to work that I couldn’t even connect to the Proxmox environment at that point. I decided the best thing to do was to wipe everything and start over, applying what I’d learned over the last 2 days to maybe do things right. So, I factory reset the mini-PC and then set it down for the day.
When I plugged the old network back into the ISP box, nothing worked there, either. I powered everything down and restarted it all in order, so that the ISP box would renew its lease with the ISP DHCP server. That didn’t work, either. I factory reset the Google router and the mesh nodes as well. Nothing would work.
<img src="this-is-fine.jpg" alt="image of man screaming at a tree">
At this point, it was 5:30 PM and I was tired and angry. I was definitely ready for an adult beverage. I knew my ISP office was closed, but I didn’t know what else to do, so I called their technical support line. They have a 24-hour rollover to a remote management company, so I knew I could at least maybe get them to reset things from their side.
That’s when I heard the message.
“We apologize for the inconvenience, but our network connection is not working. Please be patient while we fix the problem.”
That’s right. For the first time in 4 years, my ISP had an outage. And it just so happened on the very afternoon that I was making big changes to my network.
As the kids these days say, FML. The ISP network came back on about 9 PM and I finished resetting my Google router. Which, it turns out, is another aggravating thing about the Google setup: if you don’t have an upstream connection, you can’t configure their router with their app. I suppose it’s made for dummies, not anyone wanting to do anything remotely technical.
On Wednesday, I reinstalled all the software on the mini-PC and made sure the APs could communicate. I left everything alone until Thursday, when my wife was leaving for the day to go see her best friend.
On Thursday, I made sure everything still worked correctly when not on the main network. With that done, I repeated the process from 2 days prior. I removed the Google router from the ISP box and unplugged it from power. I plugged in the mini-PC and tested the connection. There were a few minor settings I needed to update to get the firewall and router working correctly. It’s amazing how things work when they actually work.
At this point, I disassembled everything and started running ethernet cabling to 2 of the places where the indoor APs will go. This involved digging in the attic, which has fiberglass insulation. Yay. Once I had the cabling run to those points and the mounting brackets installed, I attached the APs to the ethernet and… voilà! I had 5GHz Wi-Fi 6 served throughout the house.
With everything working, I backed up the server and stored the backup image on my local PC. Then, I installed the AdGuard Home ad-blocker on the OPNsense firewall. In doing so, I didn’t pay close enough attention to the installation options and ended up overwriting the port of the main administration tool with the AdGuard Home tool. Fortunately, this was easily fixed with a couple of edits. I spent about 30 minutes digging through directories with the command line to find the configuration setting that controls which port does what.
So now, I have ad-blocking on the entire house network. Woot.
There’s still work to do, but I’m going to take a few days off until I start mucking about with things.
At this point, I have 2 SSIDs serving the house. One is for IoT traffic (Roomba, TVs, thermostats, etc.) and the other is for our personal devices. These are not on VLANs, so both networks are getting the full firewall treatment. The IoT network is on 2.4GHz and the house network is on 5GHz. I will need to set up VLANs on the firewall, switch, and APs to get these separated.
The outdoor AP is not installed where it needs to go. I’ll wait until we’re not getting precipitation and 30 MPH (50 km/h) winds before I install it on top of the 30-foot (9m) antenna mast. For now, it sits in the garage, serving up internet to our vehicles.
I have a lot to learn about routing in the days ahead. I have the simplest routing setup now, but I know I want to do more complicated things. But, routing is one of those things that if you screw it up, you have to start over from scratch. Best to learn more concepts before experimenting on a live network.
I want to install a second VM with some Linux distros that I want to try out. I want to play with NextCloud and a few other tools.
Eventually, I want to replace my dedicated Arlo wifi camera system with one that uses POE ethernet cameras. The Arlo is OK, but it’s very limited.
The big TODO, and the typical technical debt, is that I only have this written down in a notebook. All the details of how it works together are just scribbles on a sheet of paper. I need to write down all the procedures to make the network operate for the inevitable day when I’m on a work trip and my wife is dealing with a non-functioning network at home. Bah, kick that can a little further down the road!
Finally, the point of all this work is in preparation for fiber internet. I see the work trucks at the nearby phone offices and their trucks are still pulling fiber bundles along the streets. I’m hoping to get set up in the next few months.
]]>I wanted to add Google Analytics to this site as an experiment for some work projects. Getting GA4 going and configured so that my Jekyll theme generates the correct content took a couple of hours to get straight. So that’s what’s driving this update. I’ll likely make similar changes to My72MGB as it uses the same Jekyll template as this site.
I don’t post much personal information on here because I value my privacy and the privacy of those around me. The short update is
There you go.
]]>I’ve noticed a recent trend on various social platforms of people making updates about where they can be found, away from Twitter. I had a rather enviable early user name there (@ericc) that was frequently tagged by fans of a couple of other “Eric C’s” (Church and Clapton). Perhaps Twitter will come out of its spiral, but it’s unlikely, so here are some other ways to contact me, ranked in their likelihood to reach me.
The domain is pobox.com and the account is my first name plus the first letter of my last name (i.e. the first 5 letters of this website). I have aggressive spam settings on that domain, set as high as they go, with filters based on the country of the sending system. So, if you really want to email, send it as plain text from a well-known provider, like gmail.com, outlook.com, etc., and don’t send it from any of the countries with a reputation for scams.
I have a LinkedIn profile and I will respond to invites if I know the requester. I check LinkedIn about once per week, so don’t expect an immediate response.
If you have my cell number from the last 22+ years, that number still works. It starts with ‘4’ and ends with ‘9’. I have a Google Voice account and that number begins with ‘4’ and ends with ‘8’. I no longer have a landline, so if you had a number with ‘663’ in it, that number no longer works.
If you use Signal Messenger and know my cell number, I will connect.
I’ve been on Reddit for over 10 years. I mostly reply to pun threads on r/all and interact with people on a couple of hobby-specific subs. I don’t use it like I would a social network like Twitter or Facebox as nobody there realizes that I’m a :dog: in real life.
I created a new profile so I can get messages there.
I use Slack frequently for work, but I haven’t got into finding communities with it. If you have a Slack workspace where you’d like a joker such as myself, send an invite.
Like many other refugees from Twitter, I have created a Mastodon account, but I haven’t yet dug into the culture and behavior of the service to see if it suits me.
I’ve been on Discord for a few years, but I really don’t socialize there. It’s mostly out of need, but I think I will likely increase my presence there.
ecdroid#5041
I have a Facebook account for the simple reason that my first wife has a memorial page there. I log in perhaps once every few years. I have no content other than a few listings in Marketplace. I don’t accept new friend requests.
I would love to use Whatsapp for the community and its near-standard status as a messaging platform, but I’m not using a Meta platform. Similarly, Instagram.
I don’t use services with proprietary cryptographic techniques and I certainly don’t use services headquartered in Russia, so this is a non-starter.
]]>It’s been a busy year for personal stuff, but not the kinds of things that one puts on their website. So, suffice it to say, life is going on and doesn’t suck. Hope yours is the same.
]]>Note: There is no TL;DR on this post. You’re just going to have to read it. I started writing this over a year ago, but the advice still stands.
Photographers love their equipment and they love arguing about which brands are best. You can have a full-frame camera with the best lens money can buy, but it does you no good if you don’t have it when you need it.
The best camera is the one in your hand when you need to take a photo. In most cases, that camera is probably your phone. I’m going to go down the rabbit hole for a while to explain a few concepts and then we’ll pop out and talk about specific things you can do to take better photos with your phone. Please bear with me.
Let’s get started with the two most important pieces of equipment you have. Your eyes and your brain. Your eyes have great ability to adapt to the available lighting and sense subtle variations in color. As we age, that ability changes, which is where our brains take over. Our brains adjust the signals that our eyes see and process the visuals in a way that we expect or desire. It does this without us needing to do anything.
There are many analogues between our eyes, our brain, and camera equipment. It’s one of the reasons that pieces of gear have the same names as those in our heads, such as iris and lens. However, the digital version of our eyes and brains don’t have the ability to override what we think we’re seeing. And that’s where the disappointment you experience when you look at that photo later comes in.
Fortunately, we have post-processing. Or, as Snapchat and dozens of other apps call it, “filters”. We’ll get to that in a bit.
In the film days, we mostly used either a 35mm camera or what was commonly known as a “110” cartridge. Despite the name, a 35mm negative (or slide) isn’t 35mm across the image: 35mm (approximately 1 3/8”) is the width of the film stock itself, and the image frame measures 24mm x 36mm, about 43mm on the diagonal. The 110 cartridge’s frame was about 21mm on the diagonal. While the 110 format was convenient, the lack of controls in the cameras usually meant that 110 photos were mostly low quality with poor coloring and “grain” in the image. Further, because the frame size was so much smaller than 35mm, it wasn’t possible to enlarge the photos beyond a 5”x7” print (roughly equivalent to an A5 sheet).
A modern consumer-grade digital camera, such as a Canon EOS or a Nikon D series, probably has an APS-C sensor in it. There are some variations, but most of these sensors measure roughly 28mm on the diagonal. Unlike the 110 film cartridge, a modern digital sensor can record a great amount of detail in that space. However, as you try to extract more information out of the sensor (MORE MEGAPIXELS), you get “noise” in the signal, which is the digital equivalent of grainy film.
I talk about this because the sensor in your phone is usually about 10mm (0.4”) on the diagonal. Further, the lenses on your phone have to fit within the depth of the phone, which is thin to begin with. A smaller sensor means more noise in your photos, and with marketing’s insistence on promoting this year’s thing as better than last year’s thing, that noise just gets amplified.
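To make the size gap concrete, here’s the arithmetic behind those diagonals. The example sensor dimensions in the comments are typical values, not measurements of any particular camera or phone:

```python
# Sensor diagonal and "crop factor" relative to a full-frame
# (35mm film) sensor, straight from the Pythagorean theorem.
import math

def diagonal_mm(width_mm, height_mm):
    """Sensor diagonal from its width and height."""
    return math.hypot(width_mm, height_mm)

FULL_FRAME = diagonal_mm(36, 24)  # 35mm film frame: ~43.3mm diagonal

def crop_factor(width_mm, height_mm):
    """How much smaller a sensor's diagonal is than full frame."""
    return FULL_FRAME / diagonal_mm(width_mm, height_mm)

# A typical APS-C sensor (23.6 x 15.6mm): ~28mm diagonal, ~1.5x crop.
# A typical phone main sensor (6.4 x 4.8mm): 8mm diagonal, ~5.4x crop.
```

The same amount of light spread over a sensor with a fifth of the diagonal (and a small fraction of the area) is the physical root of phone-camera noise.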
These physical limitations will impact your photos, but with a little bit of thought and preparation, you can take photos that you can post on Facebook, crop tightly to get the best composition, and enlarge to print on a full sheet of paper.
So, let’s get started.
We’ll start with one of the most basic, and most used, concepts: the Rule of Thirds. It’s not a rule in the sense that it must be obeyed at all costs; it’s more of a guideline. There are plenty of good reasons to ignore it, but in the absence of a good reason, you can’t go wrong following it.
Here’s the gist. Take any scene and divide it into a grid with two equally spaced horizontal lines and two vertical lines. These lines divide the scene into nine panels of equal size. Take whatever interests you the most and put it along one of those grid lines. It doesn’t necessarily have to be at an intersection of the grid, just along a grid line. Then, put the rest of the scene into the other two-thirds of the frame to give it context. Granted, you’ll have to zoom in or out a bit. Perhaps you’ll need to move a bit to bring your subject into the best possible framing. This requires a bit of practice and thought, but it’s a simple concept.
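If you like thinking in numbers, the grid is trivial to compute; this little sketch just turns the description above into arithmetic:

```python
# The rule-of-thirds grid as arithmetic: where the four grid lines
# fall, and how far a subject sits from the nearest one.
def thirds_lines(width, height):
    """The x positions and y positions of the four thirds grid lines."""
    return (width / 3, 2 * width / 3), (height / 3, 2 * height / 3)

def distance_to_grid(point, width, height):
    """Pixels between a subject and the nearest thirds line."""
    x, y = point
    xs, ys = thirds_lines(width, height)
    return min(min(abs(x - gx) for gx in xs),
               min(abs(y - gy) for gy in ys))

# In a 3000x1500 frame, a subject at (2000, 700) sits exactly on the
# right-hand vertical third (x = 2000), so its distance is 0.
```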
Here’s an example. I took this shot in November 2007. It’s a tree on a hill near a friend’s house, taken as the sun was setting. It’s a bit under-exposed, but that’s alright.
Here’s how Adobe Lightroom shows the image. Notice the thirds grid overlaid on the image. You can see that the tree isn’t quite on the right-hand vertical grid line and the horizon is pretty well centered.
Now, here’s a treatment with a bit of cropping, highlighting, and contrast adjustment. The tree is now on the right third line and the horizon is on the bottom third. I didn’t do much saturation adjustment, other than to tone down the colors of the grasses to draw the eye to the sky and the tree.
Is it a better photo? It’s certainly more dramatic and possibly tells a story without words. In case you’re curious, here it is, with the same adjustments and with the trunk of the tree centered horizontally and filling the center third of the frame.
Is it a better shot? For me, it has much less emotional impact.
There’s a psychological foundation to why this rule works but it’s not my purpose to explain it. As I said earlier, it’s a guideline or framework, and not a mandate. If you want to center your subject and it looks good to you, then fire away.
The manufacturer of your phone wants to take care of every detail of your photos. In fact, every one of them sells their phones on the image quality. One of the top phone OEMs loves to create marketing about how great their cameras are by airing commercials and putting up billboards with incredible photos and videos shot with their phones. It turns out that a billboard is actually a pretty forgiving medium for enlarging a photograph. A better test would be an archival quality print, sized 12x17” (A3 size) at 1200 dpi.
The sad truth is these great photos are just a set of pre-defined post-processing optimizations that each OEM keeps to itself as a trade secret. The improved skin tones in Google’s Pixel 6 aren’t groundbreaking; they’re just the result of Google paying attention to that specific detail.
As part of the phone software manipulation, you’ll be tempted to use the built-in filters on the phone. In many ways, these are fine shortcuts, but they limit your ability to precisely tune the photo’s colors and exposure the way you may want.
The one real piece of advice I’m going to give on this, and I’ll elaborate in a moment, is to refrain from using filters that over-saturate the colors. It might look great to see all those vibrant greens, yellows, and reds, but they’re not real. They’re trying to trick you into accepting that this is what your eyes really saw, but it’s not. Saturation is a spice: used sparingly, it adds a wonderful flavor, but slightly too much ruins the dish.
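If you’re curious what a saturation slider actually does, here’s a rough per-pixel sketch using Python’s standard colorsys module; real editors work in fancier color spaces, but the idea is the same:

```python
# What a saturation slider does to one pixel: convert to
# hue/lightness/saturation, scale the saturation, convert back.
import colorsys

def adjust_saturation(rgb, factor):
    """Scale one pixel's saturation; 1.0 leaves it alone, >1.0 turns it up."""
    h, l, s = colorsys.rgb_to_hls(*(c / 255 for c in rgb))
    s = min(1.0, s * factor)  # clamp so we can't exceed full saturation
    r, g, b = colorsys.hls_to_rgb(h, l, s)
    return tuple(round(c * 255) for c in (r, g, b))

# A neutral gray has zero saturation, so no amount of cranking the
# factor changes it; colored pixels get pushed toward their extremes.
```

The clamp is exactly where over-saturated filters live: every pixel shoved toward maximum saturation, which is why those vivid greens stop looking real.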
If you’re not supposed to use filters, how do you improve a photo?
Your modern smartphone (post-2015) should have the ability to shoot a photograph in what is called “RAW” mode. This means the file that’s stored is the direct output of the sensor, not interpreted in some way to produce a JPEG or PNG image. With a RAW photo, you can use aftermarket software to make many subtle alterations to the exposure, white balance, and saturation levels that you cannot make on a JPEG image. Depending on the alteration, some of the changes don’t affect the image quality at all. If you have a JPEG image, you’re always adjusting the pixels within the image, because the camera has already made a first attempt at interpreting the output of the image sensor.
I know this sounds like a huge leap, and honestly, it is. But, if you want to take and present exceptional images from your phone camera, you’re going to have to learn this step. But, once you learn these steps and you want to go to a DSLR, you’re already 80% of the way there.
When shooting RAW, you’ll have to content yourself that you’ll never have what you need on the phone to get the perfect image. You’ll have to go to your Mac/PC to use software like Lightroom or Photoshop to get the processed image. There is a mobile version of Photoshop that has basic adjustments, but it’s not a replacement for desktop software. You’ll have to set up a workflow where your RAW files can be filtered, processed, and saved as JPEG after you do your tweaking.
An entire book can be written on this subject, so I’ll just say that the concept you’re looking for is called “Digital Asset Management” or DAM. There are many excellent books on it.
I’ve explained the big concepts, so how do you put them into practice? Here’s a bunch of pointers, in no particular order. I hope you’ll find these useful.
Stabilize your shot. Holding your phone in your hand while trying to take the photo is just about the worst possible scenario. Your hand is a terrible mount for a camera. Invest in a cheap monopod (or tripod) and a camera bracket. Together, these are less than $30US on Amazon. There are dozens of models and any would be a better solution than your fingers. If you want to buy up, get a $100 gimbal to hold the phone.
Use a Timer. While you’re moving your phone to a stable holder, also delay the shot for two seconds if you can. Turn on the timer, press the button, and GET YOUR HANDS OFF THE PHONE. This removes the vibrations and gives you a sharp, clearly defined shot.
Zoom with your feet. This is one of the oldest pieces of advice for new photographers. Get as close as you safely can to the subject. Don’t rely on the “30X Zoom” on your phone. It’s not real. Even if your phone has multiple lenses, at best you’re getting 3X optical zoom. Those pixels are not as good from far away as they are up close.
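The arithmetic behind that advice is brutal: past the optical limit, “zoom” is just cropping, so resolution falls with the square of the remaining digital factor. The numbers here are illustrative:

```python
# Why "30X zoom" on a phone looks like mush: everything past the
# optical zoom is a crop, so pixel count drops with the square of
# the digital factor.
def effective_megapixels(sensor_mp, total_zoom, optical_zoom):
    """Megapixels actually left after the digital portion of a zoom."""
    digital = max(1.0, total_zoom / optical_zoom)
    return sensor_mp / digital ** 2

# A 12 MP sensor behind a 3x optical lens at "30X" zoom is doing a
# 10x digital crop, leaving 12 / 100 = 0.12 MP. Walk closer instead.
```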
Roll down the window. Oklahoma has beautiful sunsets. I can’t count the number of photos I’ve seen of a blurry sunset with something in the foreground. What is in the foreground? A splattered bug or the streak of a windshield wiper. Whatever the photographer thought was beautiful at the time has been ruined by whatever was on the window of their vehicle. Aside from the bugs, a window will distort the optics of the shot, so it’s best to just remove that extra bit of glass from the shot if you can.
As I mentioned before, I started this post almost a year ago because a friend really wanted better photos. I spent a few hours, guiding them gently down the rabbit hole and they’ve now moved on to a DSLR and doing great work. A DSLR isn’t for everyone, but with a bit of effort and learning, your phone photos can certainly become more than just snapshots.
Feel free to DM me on Twitter.
]]>The Samsung Developer Conference (SDC) for 2021 kicked off on October 26, 2021, and we hope you enjoyed the keynote and highlight sessions. As with many events, there is so much information to digest. Fortunately, with the virtual format this year, developers can go back and review the sessions they’ve watched and study how they might take advantage of all the opportunities available with Samsung platforms, SDKs, and services. While there are too many announcements and technologies to cover in one post, here are some moments from the conference that should be interesting to developers.
Samsung Electronics President DJ Koh began the conference, as he’s done in previous years. The Keynote Session included overviews of announcements that were described in more detail during the Highlight Sessions, Tech Talks, and Code Labs.
Samsung Mobile Senior Vice President Daniel Ahn kicked off the announcements at SDC with a keynote session highlighting the integration of Bixby voice control to the SmartThings ecosystem with details from SmartThings Vice President Samantha Fein.
Technical talks with more details of the integration for developers and device manufacturers are also available.
Further, the SmartThings platform team provided technical talks on the new SmartThings Edge platform and SmartThings Build for use in multi-family home environments.
Support for the Matter standard was announced during the highlight session. Samsung Electronics Vice President Jaeyong Jung and SmartThings Vice President Samantha Fein talked about the bright future for home automation with Matter in this highlight session.
The Samsung Newsroom has more information on the Bixby and SmartThings integration as well as the Matter standard support.
Samsung Executive Vice President KC Choi details how Samsung Knox unlocks the $80B enterprise market with devices and services to ensure that critical data is secure and employee information is kept confidential. In the Tech Talk sessions below, Samsung B2B product experts give developers the information they need to integrate their own apps and services for this lucrative market segment.
Developers with solutions for the enterprise market should sign up for the Knox Partner Program.
At SDC21, developers interested in smart TV solutions had plenty to learn. In the keynote, Samsung Electronics Senior Vice President Yongjae Kim and Samsung Research Vice President Bill Mandel discussed many new and exciting opportunities for developers with the Tizen platform, which are available for viewers to watch in these sessions.
The experience of the last 18 months has shown that self-care is important to our well-being. By letting us disconnect for a while, games reduce stress and offer a temporary reprieve from daily life. Samsung’s support for gamers and game developers comes to the forefront at SDC21 with these important sessions.
In addition to mobile gaming, the announcement of HDR10+ for gaming will delight gamers looking forward to top-quality experiences on smart TVs.
The Samsung Internet browser ships with every Samsung Galaxy phone. The developers and advocates for Samsung Internet want to ensure that consumers have the best possible experiences, mixing web content with mobile hardware.
Samsung Electronics Executive Vice President Janghyun Yoon unveiled the One UI 4 platform, showing numerous examples of beautifully designed cross-device experiences, such as taking a photograph from Galaxy Z Flip3, sharing to the Galaxy Book, and editing using a Galaxy Tab S with S Pen. While beauty is only skin deep, One UI 4 adds layers of security to these experiences. Learn more about One UI 4 in these sessions.
Galaxy Watch4 was introduced at Galaxy Unpacked in August 2021. The One UI Watch platform uses Wear OS powered by Samsung. Developers interested in bringing their ideas to the new platform should check out these talks.
Further, designers who are interested in expressing their creative side with watch face designs should view this session on how to use Watch Face Studio to create beautiful designs without writing code.
Samsung Sr. Developer Evangelist Tony Morelan presents the Best of Galaxy Store Awards, now in their 4th year. These awards are Samsung’s way to express gratitude to those developers and designers who are bringing beautiful and exciting apps, games, and themes to Galaxy Store.
For the second year, the Bixby team recognized the top developer and capsule for their platform.
This site has many resources for developers looking to build for and integrate with Samsung devices and services. Stay in touch with the latest news by creating a free account and subscribing to our monthly newsletter. Visit the Marketing Resources page for information on promoting and distributing your apps through the Galaxy Store. Finally, our Developer Forum is an excellent way to stay up-to-date on all things related to the Galaxy ecosystem.
Thank you for joining us for SDC21 and we look forward to seeing you in 2022.
]]>Galaxy Unpacked brought exciting new product announcements for Galaxy Watch4, Galaxy Z Fold3, and Galaxy Z Flip3. In this post, we’ll go into more detail on how you can create delightful experiences for these devices in your own apps and watch faces.
The new Galaxy Watch4 announcement shines the spotlight on the new collaborative wearable platform from Samsung and Google. This change from Tizen to Wear OS opens doors for developers and designers to create exciting new designs.
As with earlier generations of Galaxy Watch models, designers can create beautiful work without coding. For Galaxy Watch4, we’ve introduced Watch Face Studio. Designers who built for Tizen-based devices using Galaxy Watch Studio will recognize how this new tool works.
For creators with existing designs built with Galaxy Watch Studio, there is also a tool to convert from the Tizen .tpk format to the Wear OS .aab format. This tool is called Galaxy Watch Studio Converter. As there are differences between Tizen and Wear OS, some functions are not supported in the conversion process.
For support on these tools, please visit the Samsung Developer Forums for Watch Face Studio and Watch Face Studio Converter.
Developers who are interested in creating applications for the new wearable platform can find more information at the Android Developers site.
Galaxy Unpacked brought news of two exciting evolutions of Samsung phones with the Galaxy Z Fold3 and the Galaxy Z Flip3. For mobile developers, these devices provide opportunities to bring a first-class experience to users of their apps by utilizing unique features of the Z Series hardware and software.
Possibly the most notable addition to the Galaxy Z Fold3 is the S Pen, which had previously been available to owners of Galaxy Note and Galaxy Tab devices. Developers can integrate the S Pen Remote and Air Actions into their apps. Code Labs with samples are available to get you started.
To check whether the S Pen is available and establish a connection, this block of code can be used.

SpenRemote spenRemote = SpenRemote.getInstance();

// Attempt a connection only if the S Pen is not already connected.
if (!spenRemote.isConnected()) {
    spenRemote.connect(getContext(), new SpenRemote.ConnectionResultCallback() {
        @Override
        public void onSuccess(SpenUnitManager manager) {
            // Connected: keep the SpenUnitManager to access S Pen features.
        }

        @Override
        public void onFailure(int error) {
            // Connection failed: inspect the error code to see why.
        }
    });
}
Another feature of the Galaxy Z Fold3 that developers can access is the inner display, activated when the device is opened. Developers who want to provide a premium experience for these devices should ensure their apps provide continuity across the displays while interrupting their users as little as possible. Samsung provides a Code Lab and sample code to ensure that your apps provide the best experience possible.
The Galaxy Z Fold3 and Galaxy Z Flip3 also provide a unique feature called Flex Mode. Flex Mode is activated when a user opens the Z Fold3 or Z Flip3 screen half-way. In this configuration, apps are notified so they can alter their appearance to give the best experience. As with the other new features, a Code Lab with samples is available on this site.
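As a rough illustration (not taken from the Code Lab itself), an app can observe the fold posture with the Jetpack WindowManager library; the HALF_OPENED state corresponds to Flex Mode. The activity name and layout reactions below are hypothetical.

```java
import androidx.appcompat.app.AppCompatActivity;
import androidx.core.util.Consumer;
import androidx.window.java.layout.WindowInfoTrackerCallbackAdapter;
import androidx.window.layout.DisplayFeature;
import androidx.window.layout.FoldingFeature;
import androidx.window.layout.WindowInfoTracker;
import androidx.window.layout.WindowLayoutInfo;

public class FlexModeActivity extends AppCompatActivity {
    private WindowInfoTrackerCallbackAdapter windowInfoTracker;

    private final Consumer<WindowLayoutInfo> layoutListener = layoutInfo -> {
        for (DisplayFeature feature : layoutInfo.getDisplayFeatures()) {
            if (feature instanceof FoldingFeature) {
                FoldingFeature fold = (FoldingFeature) feature;
                if (fold.getState() == FoldingFeature.State.HALF_OPENED) {
                    // Half-opened: rearrange the UI for Flex Mode,
                    // e.g. content above the fold, controls below it.
                } else {
                    // Flat: use the normal full-screen layout.
                }
            }
        }
    };

    @Override
    protected void onStart() {
        super.onStart();
        windowInfoTracker =
            new WindowInfoTrackerCallbackAdapter(WindowInfoTracker.getOrCreate(this));
        windowInfoTracker.addWindowLayoutInfoListener(
            this, getMainExecutor(), layoutListener);
    }

    @Override
    protected void onStop() {
        super.onStop();
        windowInfoTracker.removeWindowLayoutInfoListener(layoutListener);
    }
}
```

The listener is registered in onStart and removed in onStop so the app only receives layout updates while visible.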
Even though Galaxy Unpacked could not be experienced live, it brought new and exciting product announcements that improve our lives. We hope you enjoyed Galaxy Unpacked and use this information to make incredible experiences for your users.
This site has many resources for developers looking to build for and integrate with Samsung devices and services. Stay in touch with the latest news by creating a free account or by subscribing to our monthly newsletter. Visit the Marketing Resources page for information on promoting and distributing your apps. Finally, our developer forum is an excellent way to stay up-to-date on all things related to the Galaxy ecosystem.
]]>This new deploy method uses Netlify, which receives notifications from GitHub when a repository has changed. On change, it pulls a clone of the repo, builds it, and if successful, it deploys it to their servers. For my use, it’s free, but they have paid tiers for commercial use.
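For a Jekyll site like this one, the build can also be pinned down in a netlify.toml file at the root of the repo. The command and publish directory below are standard Jekyll defaults, not this site’s actual configuration.

```toml
# Hypothetical netlify.toml — values are typical Jekyll defaults,
# not taken from this site's actual config.
[build]
  command = "jekyll build"
  publish = "_site"
```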
I’m editing this document with the GitHub editor, which is rudimentary but serviceable. It’s not as capable as a full-featured editor like VS Code or Atom, but for a quick note it’s just fine. If you see this post with a lot of extra edits below, then you’ll know it worked…
5 minutes later…
It worked on the first try. The real test will be when I can do this using my Samsung S6 tablet sitting in my ’72 MGB. A quick shout out to Michael Currin on the Jekyll Talk forums for convincing me to move into the modern era.
]]>There’s a problem with AWS that has persisted for years. It affects anyone who wants to deploy a secure static site hosted on Amazon S3. Many people use GitHub Pages, and that’s great, but I already use AWS for my hosting and I’d prefer to do new projects there. If you search for solutions to this problem, you’ll get steered toward using Lambdas, and that simply isn’t necessary.
To get an S3 site to deploy with HTTPS, you have to create a certificate using AWS Certificate Manager (ACM). I’m not going to get into that because it’s not germane to the problem, and there are plenty of tutorials to help you do it.
To serve the site over HTTPS, you must use CloudFront and provide the certificate for your site. ACM certificates work fine for this. After CloudFront deploys to its edge locations, you should be able to browse your site.
So long as you stay on the front page. If you click any link to pages within the site, you’ll run into a CloudFront 404 error.
This is one of those subtle differences between hosting a static site on S3 with HTTP and hosting it behind the CloudFront distribution network with HTTPS. CloudFront doesn’t treat HTML documents in sub-folders the way S3 does: the S3 web server can be configured to serve index.html by default from every sub-folder, while CloudFront serves only the top-level index.html by default and leaves the sub-folders unresolved.
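The rewrite the S3 website endpoint performs (and which a plain S3 REST origin does not) amounts to something like this — a toy sketch for illustration, not actual AWS code:

```java
public class WebsiteEndpointRewrite {
    // Mimic the S3 website endpoint's default-document behavior:
    // directory-style requests get the sub-folder's index.html appended.
    static String rewrite(String path) {
        return path.endsWith("/") ? path + "index.html" : path;
    }

    public static void main(String[] args) {
        // The website endpoint resolves /blog/ to /blog/index.html;
        // a plain S3 REST origin returns an error instead.
        System.out.println(rewrite("/blog/"));      // /blog/index.html
        System.out.println(rewrite("/about.html")); // /about.html
    }
}
```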
One solution is to have your static site generate explicit index.html paths into every link. That’s doable and I’ve done it in the past. If you search for the solution using your favorite search engine, you’ll likely come across the “official” solution to use Lambda@Edge. You’ll go down that rabbit hole for a few hours, scratching your head and digging through the seemingly circular references in the AWS docs.
Then, you might discover, as I did, the real solution. I can’t take credit for this, but I hope I can publicize it and allow the real answer to the problem to propagate.
If you look at the comments on this page, you’ll find a comment by someone named ‘ash1’. This is the correct answer…
I found a much simpler solution that doesn’t require a lambda. In your Cloud distribution, for the origin, instead of using the S3 bucket, use the S3 “endpoint url”. That will automatically take care of the subdirectory default index.html issues. Changed all of my distributions, and they are all working now with subdirectory “/” endpoints going to the index.html files within the subdirectory.
- Go to S3 and click on your bucket.
- Click on the Properties tab.
- Click on Static Website Hosting.
- At the top of the Static Website Hosting page, it will show you the endpoint URL for your S3 bucket. Use that URL value for the CF distribution origin instead of the name of your S3 bucket.
To be clear on this, in the “Origin settings” page for your CloudFront distribution, change the “Origin Domain Name” from something like mysite.com.s3.amazonaws.com to mysite.com.s3-website-us-east-1.amazonaws.com.
Once I made this change and allowed CloudFront to re-deploy my site, I was able to click on links to pages within the site.
]]>