HN Academy

The best online courses of Hacker News.

Hacker News Comments on
Introduction to Artificial Intelligence | Udacity Free Courses

Udacity · 14 HN comments

HN Academy has aggregated all Hacker News stories and comments that mention Udacity's "Introduction to Artificial Intelligence | Udacity Free Courses".
Course Description

Take Udacity's Introduction to Artificial Intelligence course and master the basics of AI. Topics include machine learning, probabilistic reasoning, robotics and more.

HN Academy Rankings
Provider Info
This course is offered on the Udacity platform.
HN Academy may receive a referral commission when you make purchases on sites after clicking through links on this page. Most courses are available for free with the option to purchase a completion certificate.
See also: all Reddit discussions that mention this course at reddacity.com.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this url.
> we can’t say anything more than "roughly order of magnitude"

This, to me, is the core of it. "10x" is just a shorthand for the compound interest of good decisions. Nobody would be surprised to learn that teams with good managers are 10x more productive, or that factories with a good safety culture have 0.1x the accident rate.

Where it gets confusing is that developers have substantially more scope for compounding than their roles suggest. They're generally not managers, executives, or cultural leaders, and yet their decisions seem to compound anyway. Why is this?

For the answer, I encourage you to watch the YouTube channel "Primitive Technology", where a brave and often shirtless soul spends weeks and months constructing tools and structures from scratch. He does so with a deftness and skill that is captivating, yet the stuff he makes is, by modern standards, totally useless. It's not his fault; it's just that technology compounds, and no amount of individual skill can ever make a skyscraper out of wattle and daub.

Only in software, the most questionable of all the engineerings, do we build the processes and tools we depend on while we're using them. If you showed up to a job site talking about making your own concrete mixer to get the building done faster you'd be laughed right back to wherever you came from. Yes, in cutting-edge applications and prototype manufacturing it's different, but almost every other area of engineering takes stuff that works and uses it to make more stuff that works.

To be clear, this isn't a long-form argument for "we should have stuck with Rails", nor is it "software is just getting started, give it some time". Rather, I believe that software development is essentially and unavoidably compounding. Every design decision, abstraction, function and data structure is a piece of your foundation that you build and then stand upon to build some more. We're creating abstract machines, and when they become concrete enough to rely upon they no longer need software developers.

Which is why it's so ridiculous to imagine this 10^x developer who crushes code 24/7, laughs in the face of process or documentation, and communicates only via a colourful aura of cheeto dust and misanthropy. It's an adolescent fantasy of expertise, no different from Doctor House or Detective Batman. Sophomoric macho bullshit. The 10x isn't the person, it's the compounding effect of good decisions.

Peter Norvig is a great developer, but try airdropping him into some dumpster fire codebase that's 99% finished, riddled with bugs and way behind schedule. Is he going to grab his wizard hat and 10x his way out of it overnight? No. He's going to have to slog through the mess like anyone else. His expertise doesn't result in faster typing, but in better decisions. Decisions that work well now, but enable even better work later.

Most importantly, this kind of compounding doesn't just apply to you, but to the people you work with and the environment you work in. To enable that, you need communication, leadership and generosity. Here's the man himself, describing his work at Google [0]:

> I've varied from having two to two hundred people reporting to me, which means that sometimes I have very clear technical insight for every one of the projects I'm involved with, and sometimes I have a higher-level view, and I have to trust my teams to do the right thing. In those cases, my role is more one of communication and matchmaker--to try to explain which direction the company is going in and how a particular project fits in, and to introduce the project team to the right collaborators, producers, and consumers, but to let the team work out the details of how to reach their goals.

Or some quotes from his essay "Teach yourself programming in ten years" [1]:

> Talk with other programmers; read other programs. This is more important than any book or training course.

> Work on projects with other programmers. Be the best programmer on some projects; be the worst on some others. When you're the best, you get to test your abilities to lead a project, and to inspire others with your vision. When you're the worst, you learn what the masters do, and you learn what they don't like to do

> Work on projects after other programmers. Understand a program written by someone else. See what it takes to understand and fix it when the original programmers are not around. Think about how to design your programs to make it easier for those who will maintain them after you.

Or check out his lavishly documented walkthrough of approaches to the Travelling Salesperson Problem[2] (just one of many similarly educational "pytudes"[3]). Or the leading AI textbook he co-wrote[4]. Or the online AI course he co-developed[5]...

That's what real 10x looks like. Not some myopic ubermensch who divides the world into "code" and "dumb", but a thoughtful decision-maker who treats great work as a garden to grow, rather than a race to win.

[0] https://www.quora.com/What-does-Peter-Norvig-do-exactly-at-G...

[1] http://norvig.com/21-days.html

[2] https://nbviewer.jupyter.org/github/norvig/pytudes/blob/mast...

[3] https://github.com/norvig/pytudes

[4] http://aima.cs.berkeley.edu/

[5] https://www.udacity.com/course/intro-to-artificial-intellige...

Intro to AI

https://www.udacity.com/course/intro-to-artificial-intellige...

Machine Learning

https://www.coursera.org/learn/machine-learning

The Pacman programming exercises in python

http://ai.berkeley.edu/project_overview.html

And the Kaggle Titanic Survivability dataset

https://www.kaggle.com/c/titanic

But if you desire an even gentler intro, try Daniel Shiffman's Nature of Code in P5:

http://natureofcode.com/

best of luck ;)

ML/AI:

* https://www.udacity.com/course/intro-to-artificial-intellige...

* https://www.udacity.com/course/machine-learning--ud262

Deep Learning:

* Jeremy Howard's incredibly practical DL course http://course.fast.ai/

* Andrew Ng's new deep learning specialization (5 courses in total) on Coursera https://www.deeplearning.ai/

* Free online "book" http://neuralnetworksanddeeplearning.com/

* The first official deep learning book by Goodfellow, Bengio, Courville is also available online for free http://www.deeplearningbook.org/

TL;DR - read my post's "tag" and take those courses!

---

As you can see in my "tag" on my post - most of what I have learned came from these courses:

1. AI Class / ML Class (Stanford-sponsored, Fall 2011)

2. Udacity CS373 (2012) - https://www.udacity.com/course/artificial-intelligence-for-r...

3. Udacity Self-Driving Car Engineer Nanodegree (currently taking) - https://www.udacity.com/course/self-driving-car-engineer-nan...

For the first two (AI and ML Class) - these two MOOCs kicked off the founding of Udacity and Coursera, respectively. The classes are still available from each:

Udacity: Intro to AI (What was "AI Class"):

https://www.udacity.com/course/intro-to-artificial-intellige...

Coursera: Machine Learning (What was "ML Class"):

https://www.coursera.org/learn/machine-learning

Now - a few notes: For any of these, you'll want a good understanding of linear algebra (mainly matrices/vectors and the math to manipulate them), stats and probabilities, and to a lesser extent, calculus (basic info on derivatives). Khan Academy or other sources can get you there (I think Coursera and Udacity have courses for these, too - plus there are a ton of other MOOCs, plus MIT's OpenCourseWare).

Also - and this is something I haven't noted before - but the terms "Artificial Intelligence" and "Machine Learning" don't necessarily mean the same thing. As I understand it, machine learning is a subfield of artificial intelligence, and modern artificial neural networks and deep learning are, in turn, a subset of machine learning. Machine learning also encompasses standard "algorithmic" learning techniques, like logistic and linear regression.

The reason neural networks fall under ML is that a trained neural network ultimately implements a form of logistic regression (categorization, true/false, etc.) or linear regression (a continuous range) - depending on how the network is set up and trained. The power of a neural network comes from not having to find all of the dependencies (iow, the "function") yourself; instead, the network learns them from the data. It ends up being a "black box" algorithm, but it can work with datasets that are much larger and more complex than what the algorithmic approaches allow for (that said, the algorithmic approaches are useful, in that they use much less processing power and are easier to understand - no use attempting to drive a tack with a sledgehammer).
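To make the "algorithmic" side concrete, here's a minimal logistic regression fit by gradient descent in plain Python (the toy data and names are my own illustration, not from any of the courses above):

```python
import math

# Toy dataset: predict pass/fail (y) from hours studied (x).
xs = [0.5, 1.0, 1.5, 3.0, 3.5, 6.0]
ys = [0,   0,   0,   1,   1,   1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit weight w and bias b by gradient descent on the cross-entropy loss.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(5000):
    grad_w = grad_b = 0.0
    for x, y in zip(xs, ys):
        err = sigmoid(w * x + b) - y   # dLoss/dz for sigmoid + cross-entropy
        grad_w += err * x
        grad_b += err
    w -= lr * grad_w / len(xs)
    b -= lr * grad_b / len(xs)

# The fitted model assigns low probability below the class boundary
# and high probability above it.
print(sigmoid(w * 0.5 + b))
print(sigmoid(w * 6.0 + b))
```

The same "find parameters by following the gradient of a loss" idea is exactly what a neural network does - just with many more parameters stacked in layers.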

With that in mind, the sequence to learn this stuff would probably be:

1. Make sure you understand your basics: Linear Algebra, stats and probabilities, and derivatives

2. Take a course or read a book on basic machine learning techniques (linear regression, logistic regression, gradient descent, etc).

3. Delve into simple artificial neural networks (which may be a part of the machine learning curriculum): understand what feed-forward and back-prop are, how a simple network can learn logic (XOR, AND, etc), how a simple network can answer "yes/no" and/or categorical questions (basic MNIST dataset). Understand how they "learn" the various regression algorithms.

4. Jump into artificial intelligence and deep learning - implement a simple neural network library, learn tensorflow and keras, convolutional networks, and so forth...
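As a taste of step 3, here's a minimal sketch of a tiny feed-forward network learning XOR by back-propagation (my own illustration, assuming only numpy):

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: the classic problem a single neuron can't solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A 2-4-1 network: one hidden layer, sigmoid activations throughout.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 1.0
for _ in range(20000):
    # Feed-forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Back-propagation: the chain rule applied to squared-error loss.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

mse = float(((out - y) ** 2).mean())
print(mse)  # the loss drops as the network learns the XOR mapping
```

That's the whole trick: forward pass, compare to targets, push the error backwards through the chain rule, nudge the weights.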

Now - regarding self-driving vehicles - they necessarily use all of the above, and more, including more than a bit of "mechanical" techniques. For example: use OpenCV or another machine vision library to pick out details of the road and other objects, which might then be processed by a deep learning CNN - e.g., have a system that picks out "road sign" objects from a camera, then categorizes them to "read" them and uses that information to decide how to drive the car (come to a stop, or keep at a set speed). In essence, you've just made a portion of Tesla's vehicle assist system (the first project we did in the course I am taking now was to "follow lane lines" - the main ingredient behind "lane assist" technology - using nothing but OpenCV and Python). You'll also likely learn about Kalman filters, pathfinding algorithms, sensor fusion, SLAM, PID controllers, etc.
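Of the controllers mentioned, PID is the simplest to sketch. Here's a toy cruise-control loop in Python (the plant model, gains, and time step are my own illustrative assumptions, not from any course):

```python
# Toy PID cruise control: drive vehicle speed toward a 30 m/s setpoint.

def simulate(kp, ki, kd, setpoint=30.0, steps=200, dt=0.1):
    speed, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - speed
        integral += err * dt                      # accumulated error (I term)
        deriv = (err - prev_err) / dt             # error rate of change (D term)
        throttle = kp * err + ki * integral + kd * deriv  # the PID control law
        # Crude vehicle model: acceleration from throttle minus drag.
        speed += (throttle - 0.1 * speed) * dt
        prev_err = err
    return speed

final = simulate(kp=0.8, ki=0.2, kd=0.3)
print(final)  # settles near the 30 m/s setpoint
```

The P term fights the current error, the I term removes the steady-state offset the P term alone would leave, and the D term damps the overshoot - which is roughly how the course motivates each piece.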

I can't really recommend any books to you, given my level of knowledge. I've read more than a few, but most of them would be considered "out of date". One that is still being used in university level courses is this:

http://aima.cs.berkeley.edu/

https://www.amazon.com/Artificial-Intelligence-Modern-Approa...

Note that it is a textbook, with textbook pricing...

Another one that I have heard is good for learning neural networks with is:

https://www.amazon.com/Make-Your-Own-Neural-Network/dp/15308...

There are tons of other resources online - the problem is separating the wheat from the chaff, because some of the material is outdated or even considered unhelpful. There are many research papers out there that can be bewildering. Until you know which is which, take them all with a grain of salt - research papers and websites alike. There's also the problem of finding diamonds in the rough (for instance, LeNet was created in the 1990s - but that was in the middle of an AI winter, and some of the material written at the time isn't considered as useful today - yet LeNet is a foundational work of today's ML/AI practices).

Now - history: you would do yourself good to understand the history of AI and ML, the debates, the arguments, etc. The foundational work comes from McCulloch and Pitts's concept of an artificial neuron, and where that led:

https://en.wikipedia.org/wiki/Artificial_neuron

Also - Alan Turing anticipated neural networks of a kind that wasn't seen until much later:

http://www.alanturing.net/turing_archive/pages/reference%20a...

...I don't know if he was aware of McCulloch and Pitts's earlier work, as they were coming at the problem from the physiological side of things; a classic case where inter-disciplinary work might have benefited all (?).

You might want to also look into the philosophical side of things - theory of mind stuff, and some of the "greats" there (Minsky, Searle, etc); also look into the books written and edited by Douglas Hofstadter:

https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach

There's also the "lesser known" or "controversial" historical people:

* Hugo De Garis (CAM-Brain Machine)

* Igor Aleksander

* Donald Michie (MENACE)

...among others. It's interesting - De Garis was a very controversial figure, and most of his work - for whatever it is worth - has kinda been swept under the rug. He built a few computers that were FPGA-based hardware neural network machines that used cellular automata a-life to "evolve" neural networks. There were only a handful of these machines made; aesthetically, their designs were as "sexy" as the old Cray computers (seriously).

Donald Michie's MENACE - interestingly enough - was a "learning computer" made of matchboxes and beads. It implemented a simple reinforcement learner that learned how to play (and win at) noughts and crosses (tic-tac-toe). All in a physically (by hand) operated "machine".
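MENACE's bead mechanism is simple enough to sketch in code. Here's a toy version of the idea (my own construction, applied to a small Nim game rather than full noughts and crosses): one "matchbox" of beads per game state, moves drawn in proportion to bead counts, beads added on a win and confiscated on a loss.

```python
import random

random.seed(0)

# Tiny Nim: 5 sticks, players alternate taking 1 or 2, and whoever
# takes the last stick wins. One "matchbox" per stick count.
boxes = {n: {m: 3 for m in (1, 2) if m <= n} for n in range(1, 6)}

def pick(n):
    """Draw a move from box n with probability proportional to its beads."""
    moves, weights = zip(*boxes[n].items())
    return random.choices(moves, weights=weights)[0]

for _ in range(3000):
    sticks, history = 5, []
    learner_turn = True
    while sticks > 0:
        if learner_turn:
            move = pick(sticks)
            history.append((sticks, move))
        else:
            move = random.choice([m for m in (1, 2) if m <= sticks])
        sticks -= move
        won = sticks == 0 and learner_turn   # did the learner take the last stick?
        learner_turn = not learner_turn
    # Reinforce: add a bead to each move played on a win, remove one on a loss.
    for state, move in history:
        boxes[state][move] += 1 if won else -1
        boxes[state][move] = max(boxes[state][move], 1)

print(boxes)  # winning moves accumulate beads; losing moves wither
```

Exactly as with the matchboxes: no neurons, no gradients - just move probabilities reshaped by wins and losses.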

Then there is one guy who is "reviled" in the old-school AI community on the internet (take a look at some of the old comp.ai newsgroup archives, among others). His nom de plume is "Mentifex", and he wrote something called "MIND.Forth" (and translated it to a ton of other languages) that he claimed was a real learning system/program/whatever. His real name is Arthur T. Murray - and he is widely considered to be one of the earliest "cranks" on the internet:

http://www.nothingisreal.com/mentifex_faq.html

Heck - just by posting this I might be summoning him here! Seriously - this guy gets around.

Even so - I'm of the opinion that it might be useful for people to know about him, so they don't go too far down his rabbit hole; at the same time, I have a small feeling that there might be a gem or two hidden inside his system or elsewhere. Maybe not, but I like to keep a somewhat open mind about these kinds of things, and not just dismiss them out of hand (though I still keep in mind the opinions of those more learned and experienced than me).

EDIT: formatting

ep103
Oh man, thank you! Thank you!
> As someone who's interested in taking the Udacity course, would you recommend it?

So far, yes - but that has a few caveats:

See - I have some background prior to this, and I think it biases me a bit. First, I was one of the cohort that took the Stanford-sponsored ML Class (Andrew Ng) and AI Class (Thrun/Norvig), in 2011. While I wasn't able to complete the AI Class (due to personal reasons), I did complete the ML Class.

Both of these courses are now offered by Udacity (AI Class) and Coursera (ML Class):

https://www.udacity.com/course/intro-to-artificial-intellige...

https://www.coursera.org/learn/machine-learning

If you have never done any of this before, I encourage you to look into these courses first. IIRC, they are both free and self-paced online. I honestly found the ML Class to be easier than the AI class when I took them - but that was before the founding of these two MOOC-focused companies, so the content may have changed or been made more understandable since then.

In fact, now that I think about it, I might try taking those courses again myself as a refresher!

After that (and kicking myself for dropping out of the AI Class - but I didn't have a real choice there at the time), in 2012 Udacity started, and because of (reasons...) they couldn't offer the AI Class as a course (while for some reason, Coursera could offer the ML Class - there must have been licensing issues or something) - so instead, they offered their CS373 course in 2012 (at the time, titled "How to Build Your Own Self-Driving Vehicle" or something like that - quite a lofty title):

https://www.udacity.com/course/artificial-intelligence-for-r...

I jumped at it - and completed it as well; I found it to be a great course, and while difficult, it was very enlightening on several fronts (for the first time, it clearly explained to me exactly how a Kalman filter and a PID controller worked!).
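For the curious, the 1-D Kalman filter taught in that style of course boils down to two tiny update rules - here's a sketch in Python (the numbers are made up for illustration):

```python
# 1-D Kalman filter: track a moving value from noisy measurements.
# Two steps alternate forever: measurement update and motion (predict) update.

def measurement_update(mean, var, z, r):
    """Fuse a noisy measurement z (variance r) into the current belief."""
    new_mean = (var * z + r * mean) / (var + r)
    new_var = 1.0 / (1.0 / var + 1.0 / r)   # fusing always shrinks uncertainty
    return new_mean, new_var

def predict(mean, var, u, q):
    """Shift the belief by motion u, adding motion noise q."""
    return mean + u, var + q                # moving always grows uncertainty

mean, var = 0.0, 1000.0                     # start nearly ignorant
for z in [5.0, 6.0, 7.0, 9.0, 10.0]:
    mean, var = measurement_update(mean, var, z, r=4.0)
    mean, var = predict(mean, var, u=1.0, q=2.0)  # object drifts +1 per step

print(mean, var)  # belief hugs the measurements; variance settles low
```

The elegance is that "measure, then move" is the whole loop: measurements tighten the Gaussian, motion loosens it, and the filter balances the two automatically.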

So - I have that background, plus everything else I have read before then or since (AI/ML has been a side interest of mine since I was a child - I'm 43 now).

My suggestion, if you are just starting, would be to take the courses in roughly this order - and only after you are fairly comfortable with both linear algebra concepts (mainly vector/matrix math - dot product and the like) and stats/probabilities. To a certain extent (and I have found this out with the current Udacity course), knowledge of some basic calculus concepts (mainly derivatives) will help - but so far, despite that minor handicap, I've been OK without that greater knowledge - though I do intend to learn it:

1. Coursera ML Class

2. Udacity AI Class

3. Udacity CS373 course

4. Udacity Self-Driving Car Engineer Nanodegree

> Do you think the course prepares you enough to find a self-driving developer job?

I honestly think it will - but I also have over 25 years under my belt as a professional software developer/engineer. Ultimately, it - along with the other courses I took - will help (and has helped) me by providing other tools and ideas to bring to bear on problems. Also - realize that this knowledge can apply to multiple domains, not just vehicles. Marketing, robotics, design - heck, you name it - all currently need, or will need, people who understand machine learning techniques.

> Would you learn enough to compete/work along side people who got their Masters/PhD in Machine Learning?

I believe you could, depending on your prior background. That said, don't think that these courses could ever substitute for a graduate degree in ML - but I do think they could be a great stepping stone. I am actually planning on looking into getting my BA, then Masters (hopefully), in Comp Sci after completing this course. It's something I should have done long ago, but better late than never, I guess! All I currently have is an associates from a tech school (worth almost nothing) and my high school diploma - but that, plus my willingness to constantly learn and stay ahead in my skills, has never let me down career-wise! So I think having this ML experience will ultimately be a plus.

Worst-case scenario: I can use what I have learned in the development of a homebrew UGV (unmanned ground vehicle) I've been working on, off and on, for the past few years (mostly "off" - lol).

> Appreciate your input.

No problem, I hope my thoughts help - if you have other questions, PM me...

These 3 are the most well-known and well-regarded zero-to-hero type intro courses online, and high-school math is sufficient to follow along (but pick only one and go start to finish!):

* https://www.udacity.com/course/intro-to-artificial-intellige... (by Peter Norvig - director of research @ Google - and Sebastian Thrun - lead developer of the Google self-driving car and founder of Google X, now at Georgia Tech) - great if you want a more "deep thinking" style intro to AI

* https://www.udacity.com/course/intro-to-machine-learning--ud... (Sebastian Thrun & Katie Malone - a former physicist and data scientist who is great at explaining stuff so that anyone can grok it) - great if you want a more "down to earth" engineering-style intro with simple, clear examples

* https://www.coursera.org/learn/machine-learning (Andrew Ng - @ Stanford, chief scientist at Baidu, former Google researcher) - great if you want a "bottom up" route, from math through code/engineering, with less fuzzy big-picture stuff. Even if Andrew Ng is less of a rock-star presenter, if you want to start from the math details up, take this one!

Oh, and kaggle: https://www.kaggle.com/ . If you get stuck on anything, google the relevant math, pick up just enough to have an intuition and carry on.

You're still in college, so you have plenty of time to learn the required math well; it's better to get a broad picture of the field ASAP, imho! Then when you take the math classes, you'll already have "aha, this fills my gap about X and Y" and "aha, now I get why Z" moments, and you'll really love that math after you already know what problems it solves!

(PS if you're less of a "highly logico-intuitive" person and more "analytical rigorous thinker" instead, just ignore my last paragraph and focus on the math, but try to get some deep intuition of probability along the way)

CN7R
I'll try to do the coursera course on ML -- last time I tried I got swamped by schoolwork. Thanks for the suggestions.
I'll tell you how I started my journey:

I took the Stanford ML Class in 2011 taught by Andrew Ng; ultimately, Coursera was born from it, and you can still find that class in their offerings:

https://www.coursera.org/learn/machine-learning

On a similar note, Udacity sprung up from the AI Class that ran at the same time (taught by Peter Norvig and Sebastian Thrun); Udacity has since added the class to their lineup (though at the time, they had trouble doing this - and so spawned the CS373 course):

https://www.udacity.com/course/intro-to-artificial-intellige...

https://www.udacity.com/course/artificial-intelligence-for-r...

I took the CS373 course later in 2012 (I had started the AI Class, but had to drop out due to personal issues at the time).

Today I am currently taking Udacity's "Self-Driving Car Engineer" nanodegree program.

But it all started with the ML Class. Prior to that, I had played around with things on my own, but nothing really made a whole lot of sense for me, because I lacked some of the basic insights, which the ML Class course gave to me.

Primarily - and these are key (and if you don't have an idea about them, then you should study them first):

1. Machine learning uses a lot of tools based on and around probabilities and statistics.

2. Machine learning uses a good amount of linear algebra

3. Neural networks use a lot of matrix math (which is why they can be fast and scale - especially with GPUs and other multi-core systems)

4. If you want to go beyond the "black box" aspect of machine learning - brush up on your calculus (mainly derivatives).

That last one is what I am currently struggling with and working through; while the course I am taking isn't stressing this part, I want to know more about what is going on "under the hood", so to speak. Right now, we are neck-deep in learning TensorFlow (with Python); TensorFlow makes it pretty simple to create neural networks, but understanding how forward and back-prop work (in the ML Class we had to implement them in Octave - we didn't use a library) has been extremely helpful.
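That "matrix math" point is easy to make concrete: one fully connected layer's feed-forward step, for an entire batch of inputs, is a single matrix multiply. A sketch in numpy (sizes and names are my own illustration, not from the course):

```python
import numpy as np

rng = np.random.default_rng(42)

# A batch of 32 samples, each with 784 features (e.g. flattened MNIST images),
# passing through one fully connected layer of 128 units.
batch = rng.normal(size=(32, 784))
weights = rng.normal(size=(784, 128)) * 0.01
bias = np.zeros(128)

# The entire layer, for the entire batch, is one matrix multiply plus a bias -
# exactly the dense, regular operation GPUs are built to parallelize.
activations = np.maximum(0.0, batch @ weights + bias)   # ReLU nonlinearity

print(activations.shape)  # (32, 128)
```

Frameworks like TensorFlow essentially chain many such multiplies together and differentiate through them automatically - which is why the linear algebra and the derivatives are the two things worth understanding deeply.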

Did I find the ML Class difficult? Yeah - I did. I hadn't touched linear algebra in 20+ years when I took the course, and I certainly didn't have any skills in probabilities (so, Khan Academy and the like to the rescue). Even now, while things are a bit easier, I am still finding certain tasks challenging in this nanodegree course. But then, if you aren't challenged, you aren't learning.

Wazzymandias
Thank you for the comprehensive response! I have a lot to learn but it's exciting.
Learning from Data by Caltech Professor Yaser Abu-Mostafa: https://work.caltech.edu/telecourse.html. There was an edX version last year.

MMDS: Mining Massive Datasets by Stanford professors Jure Leskovec, Anand Rajaraman, Jeff Ullman. Link: https://www.coursera.org/course/mmds

Neural Networks for Machine Learning: Geoffrey Hinton, Link: https://www.coursera.org/course/neuralnets

Artificial Intelligence for Robotics: Programming a Robotic Car, Sebastian Thrun Link: https://www.udacity.com/course/artificial-intelligence-for-r...

Intro to Artificial Intelligence, Peter Norvig & Sebastian Thrun. This was the one which started it all in 2011, joined a little late by Andrew Ng's ML course which has been mentioned already.

Intro to Artificial Intelligence link: https://www.udacity.com/course/intro-to-artificial-intellige...

alkalait
I enjoyed MMDS, but the lectures given by Ullman were painful to sit through. The man simply could not deliver a single sentence that wasn't read verbatim from a screen.
roye
same here, had to watch some of those lectures at 2x speed
Hortinstein
I took Mr. Thrun's class on Programming a Robotic Car last year for part of my OMSCS classes at GA Tech.

They should use classes like this in undergrad computer science to show why linear algebra will be so important, and the amazing applications you can build with it. Highly recommended.

Buy and work through "Artificial Intelligence: A Modern Approach". It's a huge book and the de facto standard for pretty much every AI 101+ course. Some of the material may interest you and some might not, but it covers a broad range (from logic-based agents to Bayesian networks). It's systematic and has excellent references and further-reading notes for each chapter. The focus is not on the currently sexy "data science" aspects, though (however, you will find plenty of material that is relevant).

The edX class from Berkeley is pretty fun and hands on. It uses Pacman as a running example and essentially teaches the agents stuff from AIAMA:

https://www.edx.org/course/artificial-intelligence-uc-berkel...

The Stanford class by Thrun and Norvig himself (one of the authors of AIAMA) is also good but I prefer the edX one:

https://www.udacity.com/course/intro-to-artificial-intellige...

Edit: changed to direct links for the courses

wimagguc
+1 for the Stanford course. Great intro to AI and super easy to follow - I've done it after my uni class elsewhere, and it helped to internalise what I've learned there.
musername
reading the book still requires a significant amount of time
tansey
The AIMA book is sort of a Good Old-Fashioned AI (GOFAI) book that focuses a lot on agents and planning. The jobs this article is talking about are really machine learning ones-- taking large volumes of data and extracting knowledge, so as to build recommender systems and such. For that, Kevin Murphy's book, "Machine Learning: A Probabilistic Approach" is without a doubt the best book out there, both in terms of explaining things from the ground up and being the most comprehensive/up-to-date source.
juliangregorian
Murphy's book is actually subtitled "A Probabilistic Perspective" -- "Machine Learning: A Probabilistic Approach" is a different book by a different author.
kriro
There's still quite a bit of material on Bayesian networks (with the dreadful dentist example :D), neural networks, and support vector machines, but overall you're right: the focus is on agents. The relevant chapters are great starting points, though, and as always they're filled with great reference material for further reading.

+ I'm pretty sure if you apply for an AI job somewhere and it's labeled AI and not "data science", they'll expect that you know the material in AIAMA.

Here's a udacity course: https://www.udacity.com/course/cs271

It looks like it's the same material as the original AI class from 2011 (which started the online course hype). I took that course back then, and found it to give a good introduction.

The pricey but standard recommendation would be buying and working through "Artificial Intelligence - A Modern Approach". It's one of my favourite IT books :)

There's a pretty fun/good course on one of the free MOOCs that pretty much follows the book and uses pacman as an example. It uses Python. It's the course that has a cute robot on the slides :) Edit (found it, this one): https://www.edx.org/course/uc-berkeleyx/uc-berkeleyx-cs188-1...

Edit2: There's also this one but it's not the one I'm thinking of (it's also good though, Norvig is one of the authors of AI-AMA): https://www.udacity.com/course/cs271

Nov 14, 2013 · mbeissinger on Deep Learning 101
Definitely a solid foundation in linear algebra and statistics (mostly Bayesian) are necessary for understanding how the algorithms work. Check out the wiki portals (http://en.wikipedia.org/wiki/Machine_learning) and (http://en.wikipedia.org/wiki/Artificial_intelligence) for overviews of the most common approaches.

Also, Andrew Ng's coursera course on machine learning is amazing (https://www.coursera.org/course/ml) as well as Norvig and Thrun's Udacity course on AI (https://www.udacity.com/course/cs271)

I've browsed it a bit. I'm hoping they will offer it again.

I've been following the self paced AI class in Udacity https://www.udacity.com/course/cs271

I've done an AI course at Udacity - https://www.udacity.com/course/cs271 - I found it pretty good if you are interested in learning the very basics of AI
scdsharp7
I also enjoyed their 3D Graphics programming course cs291.
HN Academy is an independent project and is not operated by Y Combinator, Coursera, edX, or any of the universities and other institutions providing courses.