HN Academy

The best online courses of Hacker News.

Hacker News Comments on
Architecting with Google Kubernetes Engine

Coursera · Google Cloud · 1 HN comment

HN Academy has aggregated all Hacker News stories and comments that mention Coursera's "Architecting with Google Kubernetes Engine" from Google Cloud.
Course Description

Google Compute Engine · Google App Engine (GAE) · Google Cloud Platform · Cloud Computing

HN Academy Rankings
Provider Info
This course is offered by Google Cloud on the Coursera platform.
HN Academy may receive a referral commission when you make purchases on sites after clicking through links on this page. Most courses are available for free with the option to purchase a completion certificate.
See also: all Reddit discussions that mention this course at reddsera.com.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this URL.
Kubernetes is probably one of the most tutorialized pieces of software out there. You want a "crash course" on the platform of your choice? You got it. You want it at the price of your choice? You got it.

The docs are great [1] and even include interactive parts [2]. There is a ton of good material on YouTube. There are Coursera and edX courses, e.g. [3] from Google and [4] from Red Hat. There are a ton of Udemy courses, e.g. [5]. I'm not even going to mention sites like Pluralsight (which now owns CloudGuru and CloudAcademy).

I'm not even mentioning books on advanced Kubernetes and service mesh topics.

I wonder what drives someone to write a 101-level tutorial on K8s. In my mind, there's such an overload of information on the topic that a new one must get lost among the greats.

[1] https://kubernetes.io/docs/concepts/overview/what-is-kuberne...

[2] https://kubernetes.io/docs/tutorials/kubernetes-basics/creat...

[3] https://www.coursera.org/specializations/architecting-google...

[4] https://www.coursera.org/specializations/cloud-native-develo...

[5] https://www.udemy.com/course/learn-kubernetes/

brikis98
Author here. I tried to answer your question in the first two paragraphs. But to add some context, given the nature of my work, I hear from developers on a nearly daily basis who are struggling to get started with the technologies mentioned in this blog post series, which include not only Kubernetes, but also Docker, AWS, and Terraform. In part, they are struggling because they are too scared to ask for help, and comments like yours only make that worse: you seem to be implying that the materials out there for Kubernetes are so good that if you don't get it, there must be something wrong with you. And yet, there are thousands of devs who don't get it, so maybe different people learn in different ways?

In discussions like this, I'm a fan of what Steve Yegge wrote about blogging [1]:

> This is an important thing to keep in mind when you're blogging. Each person in your audience is on a different clock, and all of them are ahead of you in some ways and behind you in others. The point of blogging is that we all agree to share where we're at, and not poke fun at people who seem to be behind us, because they may know other things that we won't truly understand for years, if ever.

That's why I write: to share what I know, from my particular perspective. Hopefully, that's useful to some people out there. If it's not useful to you, no problem!

And for the record, I agree the Kubernetes docs are great, including those interactive tutorials: if you read the series, you'd see I actually recommend those exact docs at the end of the post [2].

[1] https://sites.google.com/site/steveyegge2/you-should-write-b...

[2] https://blog.gruntwork.io/a-crash-course-on-kubernetes-a96c3...

waynesonfire
I just don't want to do that, no thanks. I've spent so much time learning about how an OS can solve these problems and these lessons have paid dividends many times over my career. Instead, I will continue to invest in the core foundation. I'm not interested in re-learning these abstractions reading [1]-[5] tutorials plus many more and studying an unrefined, new layer of complicated abstractions built on top of the OS--the OS is difficult enough. I'll only _use_ k8s when it's managed and supported by a team of 10+ engineers, which is what it requires. Plus, it's not just k8s; you have concourse, spinnaker, artifactory, some sort of cluster templating, kops? to template your k8s deployments across DCs and environments, (I don't know what the community uses, we built our own in-house tool). It's all so gross.

My rejection of these systems has led me to invest in FreeBSD. There was a learning curve and I'm certainly not as fluent in this system as I am with Linux, but I'm in a place where I'm in control of the OS and have at my disposal solid, refined tools to help me masterfully construct the infrastructure to solve my problems. And every time I solve a problem in this space my foundational understanding grows, and these lessons will pay dividends into the future, or so I hope.

I'm utilizing FreeBSD because docker/k8s (and I'll add systemd to the list) are missing, and thus the community solves problems in a different way that better aligns with my values. FreeBSD isn't easier; I've been stumped on problems countless times, but when I find a solution, it scales. What I mean by that is that I end up with a better understanding of a system that can solve a larger class of problems than if I were in the docker/k8s ecosystem.

YMMV.

BossingAround
That sounds like a bit of an extreme reaction to what seems to be a "this technology is yucky and I don't like it" problem.
intelVISA
They're not wrong tbf, you can largely replace K8s with curl and some basic cloud CLI scripts for the majority of cases. Slight hyperbole but I'll never really understand the weird urge to use Borg-lite everywhere.
waynesonfire
No, you're way off. Maybe my reaction is extreme, but my problem isn't. What's yucky is that you're trying to convince people that reading five tutorials on k8s will make them productive. No, the problem, and I'll repeat it since you conveniently skipped over it, is that k8s requires 10+ engineers to maintain and manage, full time.

_I_ don't have 10 engineers and thus it's the wrong tool for me and I've decided to not invest in it.

bavell
K8s is certainly a beast and I agree with your posts directionally but just as a counterpoint - as a solopreneur I've been using k8s to run my business workloads (on GKE) for the past 4 or 5 years now. I'm very comfortable with dev and ops and k8s is a force multiplier for me, letting me easily manage much more than I could without it (e.g. due to automation, tooling, community, etc). Building on top of a standardized platform with a huge community has been a big win for me as a solo dev.

I'm not running a very big operation, I only have two nodes which host a few custom webapps and a few dozen WP sites. Running in a single region removes the extra charge for HA GKE, letting me run pretty lean and just pay for the VM, storage and bandwidth. I hardly ever have to spend any time on managing the cluster, it keeps chugging along while I get things done and makes it easy for me to manage app lifecycles. YMMV.

I keep it simple: I tried helm but didn't like it because it added too much complexity. I pull in cert-manager and nginx-ingress to every cluster I run, but nothing else. I build my images locally and push to the registry directly, no CI/CD. I focus on the core competencies of k8s and try to stay lean and conservative when adding new tools or components.
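
For a concrete picture of that lean setup, here is a minimal sketch of the kind of Ingress such a cluster would carry: nginx-ingress for routing plus a cert-manager annotation for TLS. The host, Service name, and issuer name are hypothetical.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-site
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod  # assumes a ClusterIssuer with this name exists
    spec:
      ingressClassName: nginx
      tls:
        - hosts: [example.com]
          secretName: example-com-tls  # cert-manager stores the issued certificate here
      rules:
        - host: example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: example-site  # hypothetical Service in front of the app pods
                    port:
                      number: 80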

bzmrgonz
This is a very interesting use case for k8s. Can you expand on it some more? For instance, why did you pick this over vhosting? You essentially took Tesla's electric semi-truck, removed the cab, extended the chassis, and slapped a school bus body on top of it. I think I like it, but I'm wondering how stable it is, and how much set-it-and-forget-it benefit you get out of it. Sometimes it's best to over-engineer to sleep well at night, knowing the limits will never be tested. This is why XCP-NG beats Proxmox: there's less wet-nursing involved. Let me know if you have any write-up on this approach that you can share. I'd be interested in reading it.
bavell
I'll consider writing a blog post on it but long story short, it's very stable and hands-off when using a managed service. I started off with some DO droplets when I first launched my business and adopted k8s around v1.6. It was a little rough back then but now I spend maybe a few hours a month max on managing my clusters, mostly just upgrading the nodes and core components (cert-manager, nginx-ingress-controller).

I fell in love with it because of the dev and ops experience. Just the shift of perspective from pets to cattle is a big improvement. Deploying via sftp is simple but k8s really helps me with the other aspects (monitoring, logging, scheduled tasks, TLS cert mgmt, blue/green deploys, multiple environments, etc) of managing apps that my clients use regularly and depend on. It's nice having one API platform that can handle all of this and feels more like assembling Lego blocks instead of bespoke solutions for every project.
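
Of the aspects listed above, scheduled tasks show the Lego-block feel most directly: a CronJob is one short manifest. A minimal sketch, with a hypothetical image name:

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: nightly-report
    spec:
      schedule: "0 2 * * *"  # run every day at 02:00
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: report
                  image: registry.example.com/report-job:1.0  # hypothetical image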

Hope that gives you some insight!

TuringNYC
>> I wonder what drives someone to write a 101-level tutorial on K8s.

When I was going through k8s tutorials several years ago, one issue was how quickly the tutorials became outdated as k8s changed. You'd spend only half your effort on the tutorial, and the other half debugging why the tutorial wasn't working.

throwaway894345
Sadly, last I checked there were relatively few tutorials for setting up on bare metal (or rather, they neglect important topics like setting up networked volumes and other things that are very likely to be required by an application).
starkd
This is no doubt a useful comment, since you provide other sources. However, it's not particularly courteous to denigrate others' contributions to the space. If you had substantive criticism of the tutorial the OP offered, it might be helpful. However, you offer none.
BossingAround
I can see your point. Thanks for your comment, I'll try to restate my point next time.
rawoke083600
To be fair, just last night I was googling kubernetes and k3s tutorials.

All of them on page 1 of the SERPs are super simple and brilliant and ONLY show you how to add nodes to clusters.

OK, now what? I've drunk the k-Koolaid and signed up for the newsletters, but how do I get value from this 'thing'?

When I code a webapp, how do I set up/config the DB? How do I do a regular LAMP app?

Lol, I never thought I'd say this, but I need a kubernetes tutorial on 'best practices' for common patterns. Not for Google scale, but how do I kubernetify my run-of-the-mill LAMP app with 100k page views a month?

There seems to be a gap between the beginner and advanced (kubernetes all the things) levels?

Maybe that is the answer? The fact that tutorials for the middle situations (LAMP apps with low-to-medium traffic) don't exist means I don't need it?

ericbarrett
> Maybe that is the answer? The fact that tutorials for the middle situations (LAMP apps with low-to-medium traffic) don't exist means I don't need it?

SRE for 15 years with FAANG (MAGMA?) scale experience; I would argue this, yes. That's a little over 2 requests per minute. If you're cloud hosting, you could get away with a single tiny host each for your front end and DB, like a t3.small on AWS, and CloudWatch alarms for monitoring. If you need extreme HA or burstability even at this low rate, a managed load balancer (e.g. AWS ALB) with a few target webservers will do the trick and let you swap them out as needed without taking down the site. A DB read replica will give you redundancy there as well. This is all 20-year-old, tried-and-true tech. You'll set it up once and it will run for years without trouble.

Once you introduce Kubernetes you've got a whole 'nother beast to feed, especially if it's self-managed, and you'll pay dearly if it's not (like EKS). For your scale, it would be kind of like buying a full sized semi truck to haul stuff for a corner store. You'd be better served with a small pickup or minivan.

Of course you don't need an "at scale" excuse to learn new tech. In which case I agree there's a dearth of practical tutorials that aren't just "here's a Helm chart." Part of this is because managing a stateful service like MySQL on K8s is not straightforward; there's a lot of ways to do it, most of them are wrong, new ways get introduced every few years, and even people who claim they've solved the issue are probably sitting on a time bomb of their own making and have just gotten lucky.

elbigbad
Our platform team got into k8s for some reason at my company. Because it's enterprise, there's never really the "millions of requests per second" sort of problem, because every deployment is single tenant. We have experienced k8s engineers who, I guess, were just using the tools they know, but wow, it's been such a hassle and has caused so much toil. Right now we're having trouble scaling back ends for parallel jobs with k8s, which is a problem we easily solved with regular tooling pre-k8s.
cjalmeida
In enterprises, k8s solves a different kind of scaling problem. You tend to have hundreds of apps that are business critical and need to be managed.

K8s provides a consistent way to package those apps, deploy them with HA, and monitor them. Ideally it replaces tons of ad-hoc scripts and Confluence pages that are very poorly maintained.
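
As a sketch of that consistency: the same few lines express packaging (an image), HA (replicas), and basic monitoring (a health probe) for any of those hundreds of apps. All names here are hypothetical.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
    spec:
      replicas: 2  # HA: two copies of the app
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
            - name: app
              image: registry.example.com/example-app:1.0  # hypothetical image
              ports:
                - containerPort: 8080
              livenessProbe:  # restart the container when health checks fail
                httpGet:
                  path: /healthz
                  port: 8080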

ancieque
I would recommend Bret Fisher's tutorials: https://www.bretfisher.com/
cloudfive
(Disclaimer: I work for Bunnyshell.) Honestly, the most benefit would be gained from making Kubernetes invisible and speeding up your development process. Tools exist now to deploy short-lived preview environments into your cluster for every PR. This is where the Kubernetes value sits. The whole "shift left" idea: test before merge, identical short-lived environments, etc.
MuffinFlavored
> SERPS

Search Engine Result Pages

vbezhenar
All you need exists on kubernetes.io.

Start with https://kubernetes.io/docs/tutorials/hello-minikube/ and proceed.

Read reference documentation on the same site whenever you need to dig somewhere.

It's awesome. You don't need any other websites. I was able to build a kubernetes cluster and am right now deploying a multi-service application, and I have had enough technical information from this website alone.

As to your questions:

You can start with the DB as an ordinary container in a StatefulSet. It's similar to docker: you configure it with environment variables. The advanced approach is operators, but you don't need those for a simple start.

Connecting to the DB is the same as with docker. You use Kubernetes secrets instead of docker secrets, and that's about it.

My message assumes that you're proficient with docker. If you're not, I suggest learning docker first. Maybe you don't need Kubernetes at your scale at all, and if you do, most docker concepts carry over to Kubernetes anyway.
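
A minimal sketch of the StatefulSet approach described above, using Postgres as the example; the Secret and headless Service names are hypothetical and assumed to exist:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: db
    spec:
      serviceName: db  # assumes a headless Service named "db"
      replicas: 1
      selector:
        matchLabels:
          app: db
      template:
        metadata:
          labels:
            app: db
        spec:
          containers:
            - name: postgres
              image: postgres:16
              env:
                - name: POSTGRES_PASSWORD
                  valueFrom:
                    secretKeyRef:  # a Kubernetes Secret instead of a docker secret
                      name: db-credentials  # hypothetical Secret
                      key: password
              volumeMounts:
                - name: data
                  mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:
        - metadata:
            name: data
          spec:
            accessModes: [ReadWriteOnce]
            resources:
              requests:
                storage: 10Gi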

timhaak
I ran into a similar problem.

Most examples failed when trying to use them together.

Also, jumping directly into K8s can be quite a jump.

I put this together to help the SA PHP group.

It starts off with just deploying directly on a server.

Then it takes you to a full application deployed on K8s with automatic SSL and DNS generation.

It needs a bit of a refresh :(

But I'm finally coming out of being a bit over-committed, so I should be updating it in the next week or two.

Still some bits missing, but it should cover all your basics:

https://github.com/haakco/deploying-laravel-app

rawoke083600
Cool, thanks, will have a look.

I'd be more interested in where kubernetes redundancy fits in alongside traditional redundancy.

Examples:

1. Do I still need gluster/ceph, or do I use the longhorn thing?

2. DB replication? Do I use the usual solutions of master-slaves and clusters, or do multiple k-nodes take that over?

3. Webserver LB with failover? Do I use the LB from my hosting vendor, haproxy, or does kubernetes have its own thing?

From what I can tell as a kubernetes-noob the value is: 1) Reproducibility 2) Reliability via redundancy 3) AutoScaling.

All of the above already has, to some degree, a previous/current solution, so which do I give up/replace with the kubernetes tool?

Sorry, yes, I'm a k-noob.

vbezhenar
I'll try to answer, but keep in mind that I'm a newbie myself.

> 1. Do I still need gluster/ceph, or do I use the longhorn thing?

Kubernetes does not care about the storage implementation. It offers an abstract way to request storage (a PersistentVolumeClaim), and then your particular Kubernetes installation fulfils that request with a PersistentVolume. So basically it comes down to your Kubernetes provider, which should have instructions about the volume classes you can use.

If you're installing Kubernetes on bare metal, you need to think about this aspect yourself, of course. Both ceph and glusterfs are popular options, and there are good Kubernetes drivers for them. You can also just use local storage, like docker does, but of course it won't survive a server outage, so it limits your availability.

I'm installing my cluster on OpenStack. There's a Cinder CSI plugin for Kubernetes, so it provides me storage when I ask for it. My provider uses SAN for one type of volume and Ceph for the other.

I think the simplest solution is some kind of NFS server. Kubernetes can consume that as well.
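
The abstract request mentioned above is the PersistentVolumeClaim itself; a minimal sketch (the storage class name is hypothetical and depends on your provider):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: standard  # hypothetical; use whatever class your provider offers
      resources:
        requests:
          storage: 5Gi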

> 2. DB replication? Do I use the usual solutions of master-slaves and clusters, or do multiple k-nodes take that over?

Basically Kubernetes does not care about your particular configuration. It runs containers and provides those containers with storage, DNS, network, etc. So it's up to you: you can configure database replication with your own tools and scripts if you like. I have had a good experience with CloudNativePG, a so-called Kubernetes Operator: a thing that configures (postgres) database clusters given an abstract definition. It can configure a master-slave cluster, and it allows for easy backup configuration to S3 storage. There are other operators as well, for almost any popular database, so it's probably better to use those unless you're very good at database operations.

And in true clouds it might be a good idea to use a managed database and not think about it at all.
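
For a sense of the "abstract definition" CloudNativePG consumes, here is a sketch modeled on the project's examples; the bucket and Secret names are hypothetical:

    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: example-pg
    spec:
      instances: 2  # one primary, one replica
      storage:
        size: 10Gi
      backup:
        barmanObjectStore:  # continuous backup to S3 via barman
          destinationPath: s3://example-backups/pg  # hypothetical bucket
          s3Credentials:
            accessKeyId:
              name: s3-creds  # hypothetical Secret holding the S3 keys
              key: ACCESS_KEY_ID
            secretAccessKey:
              name: s3-creds
              key: SECRET_ACCESS_KEY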

> 3. Webserver LB with failover? Do I use the LB from my hosting vendor, haproxy, or does kubernetes have its own thing?

You need some kind of external load balancer to deliver packets in a highly available way. My hosting provider offers one as part of its OpenStack package, and I'd guess every cloud provider does. If you're using bare metal, you need some kind of haproxy and keepalived setup (or some kind of hardware load balancer, I have no idea).

This external load balancer has to deliver TCP packets to your worker nodes, like 1.2.3.4:80 -> 10.1.1.1:30080, 10.1.1.2:30080, 10.1.1.3:30080. Once Kubernetes receives those packets, it routes them as needed. Usually you have an ingress controller which terminates HTTPS and then uses the HTTP host and path to route each request to the pod which serves it in the end. Once a request reaches Kubernetes, it'll make sure to route it the right way. If your pods are deployed with 2+ replicas, it'll be highly available. If your pod is deployed as 1 replica and its server dies, Kubernetes will eventually reschedule the pod to another server, but there'll be a service interruption of a few minutes. So everything highly available should be deployed with 2+ replicas.
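
The fixed :30080 ports in that example would come from a NodePort Service; a minimal sketch of one sitting in front of an ingress controller (names hypothetical):

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-controller
    spec:
      type: NodePort
      selector:
        app: ingress-controller  # hypothetical label on the ingress controller pods
      ports:
        - name: http
          port: 80
          targetPort: 80
          nodePort: 30080  # the fixed per-node port the external LB targets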

> From what I can tell as a kubernetes-noob the value is: 1) Reproducibility 2) Reliability via redundancy 3) AutoScaling.

Here's my take on Kubernetes value.

First, it introduces a language connecting developers and operations, and that is important. You don't need developers hand-waving about which ports they need to expose, which services they need to consume, and which HTTP routes they need to receive for a particular server. They've got a language to express how their service should be used.
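
The smallest unit of that shared language is a Service: the developer declares the port the container actually listens on, and consumers get a stable name instead of hand-waving. A sketch with hypothetical names:

    apiVersion: v1
    kind: Service
    metadata:
      name: example-app  # consumers reach it at example-app.<namespace>.svc
    spec:
      selector:
        app: example-app
      ports:
        - port: 80  # the port other services consume
          targetPort: 8080  # the port the developer's container listens on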

Second, it provides a highly available cluster, and in my testing it's quite stable. My only issue is that it takes too much time to reschedule pods from a dead server: I expected it to take a few seconds, but it took a few minutes. I don't yet know why that is.

Third, and one I didn't really expect when I started learning it: it provides high-quality solutions for some hard problems. I mentioned database clustering and database backup. I can deploy a database with a single 50-line YAML file, mostly copy-pasted from the example. It'll start master and slave pods and provide continuous backup to S3 using barman. I don't have the skills to configure that kind of setup by hand, and I expect I'd need at least a week to get there. Another problem is letsencrypt. Well, it's not that hard, but I've spent many hours debugging some convoluted nginx/caddy/whatever net of docker containers trying to figure out why letsencrypt wasn't working there. With Kubernetes cert-manager, it just works. All configuration is centralized: every service declares an ingress and gets its TLS certificate automagically; whether it's the HTTP-01 solver or the DNS-01 solver is abstracted away.
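
The centralized cert-manager configuration mentioned here boils down to a single ClusterIssuer that every ingress can reference; a sketch using the HTTP-01 solver (the email address is a placeholder):

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: admin@example.com  # placeholder
        privateKeySecretRef:
          name: letsencrypt-prod-key  # the ACME account key is stored here
        solvers:
          - http01:
              ingress:
                class: nginx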

It has a steep learning curve, that's for sure, and even more so if you want to deploy it yourself rather than use a managed offering. I suggest using a managed one if you can. I have some circumstances which prevent me from using managed Kubernetes, but I plan to migrate to managed as soon as I can. It's not that hard, but it takes time, and managed Kubernetes is cheap enough. If you can't use managed Kubernetes, try to find a provider with an OpenStack API. It'll help with load balancers and storage provisioning.

Autoscaling - that part I haven't solved yet. It's not easy if you're not using managed Kubernetes. But if you are, it should be as easy as ticking a checkbox somewhere.
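
One distinction worth noting: pod-level autoscaling is plain Kubernetes and works even on a self-managed cluster as long as metrics-server is installed; it's node autoscaling that usually needs the managed provider. A HorizontalPodAutoscaler sketch, targeting a hypothetical Deployment:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: example-app
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: example-app  # hypothetical Deployment to scale
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70  # add pods above 70% average CPU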

> All of the above already has, to some degree, a previous/current solution, so which do I give up/replace with the kubernetes tool?

Well, right now we're using three dedicated servers with docker-composes scattered all over, made with ad-hoc scripts and whatnot, partially working backups, no observability. Chaos. Kubernetes for us looks like a very promising way to throw away that chaos and rebuild operations correctly.

thunky
> I think the simplest solution is some kind of NFS server. Kubernetes can consume that as well.

Isn't this asking for trouble, running MySQL on top of NFS?

Also a k-nube.

vbezhenar
I don't have experience with that kind of setup, so I can't really comment. Kubernetes is not magic: if you're running MySQL on an NFS volume, it's just a mysql process running in a separate cgroup with an NFS volume mounted. If you think that's a bad idea, don't do it. When it comes to processes and mounts, Kubernetes just arranges things, and the rest is done by ordinary Linux APIs. I tried running Postgres with a ceph volume and it works for me. You can also configure the database to just use local storage. Of course, if that server dies, your database dies, so you'll probably want some kind of cluster setup, or be ready to restore the database from backup if that happens. Or you can start with the simple setup and migrate to something more complex in the future.
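
The local storage mentioned here is a PersistentVolume pinned to one node via nodeAffinity, which is why the pod follows it; a sketch with a hypothetical path and node name:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: db-local-pv
    spec:
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteOnce
      storageClassName: local-storage
      local:
        path: /mnt/disks/db  # hypothetical directory on the node
      nodeAffinity:  # pins the volume (and any pod using it) to one node
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values: [node-1]  # hypothetical node name
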
thunky
I hadn't heard about k8s "local storage". IIUC this fixes each pod to a single node, so the potential benefits of k8s can't be fully realized. It seems that clustering could still be achieved by running multiple instances across multiple nodes, each with local storage. I might still prefer this to having to manage NFS and whatever difficulties might come from using MySQL with remote disks.

It just seems like k8s is coercing people to do things they normally wouldn't do: MySQL on NFS, Postgres on Ceph, etc. In this case, OP just wants to "kubernetify my run-of-the-mill LAMP app" and is now being pointed towards NFS, Ceph, etc. I can't help but wonder if the juice is worth the squeeze, especially for services like databases that for most people probably don't really require dynamic provisioning or orchestration.

timhaak
1. It depends. For larger scale, gluster or ceph, though they're quite a bit more work.

Longhorn you can get up and working quickly.

Though if you are on a cloud provider, just use their storage system.

2. K8s doesn't magically solve replication, unfortunately.

Though there are helm charts that will automatically set up a replicated configuration for you.

I still need to solve backups.

Once again, if you are on a cloud provider, just use their DB offering.

3. K8s doesn't have a default out of the box.

The repo shows you how to set up traefik to handle this.

Cloud providers have normally integrated it with their LB already.

For me the large advantages are reproducibility and no vendor lock-in.

It also gives you redundancy and quite a bit of automation once set up.

Auto-scaling is always tricky.

Lastly, if you have the skills, it can be far cheaper to run your own on metal.

If you don't, the time would most likely be better spent actually coding.

Depending where you are in the world and the relevant pay scales.

HN Academy is an independent project and is not operated by Y Combinator, Coursera, edX, or any of the universities and other institutions providing courses.