Tuesday, December 10, 2013

Scientific Computing: Computational Chemistry

Source: http://t2.gstatic.com/images?q=tbn:ANd9GcQ9WyMat23OKLRJW5NIVcJkSWcizsyjUbfKtxjc1hMrOB9nuFx8

For me, anything computational is unexplored frontier. I've never studied computational physics, computational chemistry, or really any computational science, for that matter. The algorithms and models scare the heck out of me, but I guess this blog post will help my understanding and let me stare computational science right in the figurative eye.

Computational chemistry is the science of using computer simulations to help solve chemical problems. So rather than mixing chemical A with chemical B to see what happens, you could theoretically run a simulation and get the same result. The mind-boggling part is that a computational chemist must have a vast knowledge of both chemistry and computer science. Finals are right around the corner, and thinking about more schooling makes me shudder.

Source: http://lsdm.campusgrid.unina.it/files/pub.jpg

The holy grail for all computational chemists is to create a formula or program that can simulate any and all chemical reactions and behaviors. There are two branches of computational chemistry. The first is molecular mechanics and dynamics, which is based on classical mechanics and incorporates parameters taken from experiments or from theoretical methods. The second branch is computational quantum chemistry, which, to my knowledge, uses quantum mechanics to model individual atoms and molecules.
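To make the molecular mechanics branch a bit more concrete, here's a tiny sketch of my own (not taken from any real chemistry package) that evaluates the Lennard-Jones potential, one of the simple pairwise terms classical force fields use. The epsilon and sigma values are commonly quoted numbers for argon, so treat them as illustrative parameters rather than authoritative ones.

```python
def lennard_jones(r, epsilon=0.997, sigma=3.40):
    """Lennard-Jones pair energy in kJ/mol at separation r (angstroms).

    epsilon (well depth) and sigma (zero-crossing distance) are rough
    literature values for argon -- illustrative only.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# Watch the energy dip into a minimum and then relax as two atoms separate.
for r in [3.2, 3.6, 3.8, 4.0, 4.5, 5.0]:
    print(f"r = {r:.1f} angstrom  ->  E = {lennard_jones(r):+.3f} kJ/mol")
```

A real molecular dynamics code sums terms like this (plus bonds, angles, and electrostatics) over millions of atom pairs and then integrates the equations of motion over tiny time steps.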

One of the main benefits of computational chemistry over lab work is cost. Chemicals aren't cheap, and running a simulation instead of real lab equipment saves money, plain and simple. It also improves safety when chemists are dealing with volatile or toxic chemicals. Another great benefit is that, if the formulas and code are correct, you get a very accurate, reproducible result with no confounding factors such as contamination or wavering temperatures.


All in all, computational chemistry is actually a very interesting subject. The ability to test everything on a computer would be an extraordinary tool for chemists and would further chemists' knowledge of the field.


Thursday, December 5, 2013

Computer Graphics: Pixar Animation Studios

Source: http://www.geeksofdoom.com/GoD/img/2011/08/2011-08-20-pixar.jpg

As a kid, my favorite movie was Pixar's Toy Story. I already believed that my toys were alive and talking, so it was amazing seeing it on screen before my very eyes. Little did I know, Toy Story was a huge milestone for Pixar and for computer-generated imagery (CGI) technology.

Pixar has won numerous awards for its CGI effects and films. Remember the Pixar intro that has a lamp bouncing up and down on the "P" of the logo? That lamp is actually a character from the short film that launched Pixar's career, "Luxo Jr.", the first 3D computer-animated film to be nominated for the Best Animated Short Film Oscar. It was praised for its lighting, rendering, and realism, right down to the lamp's "limbs" and emotions. The CGI lighting effects were out of this world: objects cast shadows onto one another, including the very object shining the light. That was unheard-of technology at the time!

Next was my favorite, Toy Story. Released in 1995, Toy Story was the world's first computer-animated feature film. It was nominated for three Academy Awards, and director John Lasseter won a Special Achievement Academy Award for the animation techniques that made the film possible. The fact that Toy Story could even be made was incredible. Dozens of character models and sets were a huge accomplishment for the film, since most computer animations at the time were set in a fixed location with one or two characters. Toy Story was also able to cut render times roughly in half. To do this, the team applied algorithms that identified which portions of the image actually needed re-rendering in each frame. For example, if Buzz Lightyear was standing in front of Andy's bed, the bed could be rendered once and then reused in the following frames.
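That trick is essentially what graphics programmers now call dirty-region or tile caching. Here's a toy sketch of the general idea, purely my own illustration and nothing to do with Pixar's actual RenderMan pipeline: only tiles whose scene content changed since the previous frame get re-rendered, and everything else is copied from a cache.

```python
# Toy "dirty tile" renderer: re-render only the tiles whose content changed.
def render_tile(contents):
    return f"pixels({contents})"  # stand-in for an expensive renderer call

def render_frame(scene, prev_scene, cache):
    frame = {}
    for tile, contents in scene.items():
        if prev_scene and prev_scene.get(tile) == contents and tile in cache:
            frame[tile] = cache[tile]            # unchanged: reuse cached pixels
        else:
            frame[tile] = render_tile(contents)  # changed: pay the rendering cost
            cache[tile] = frame[tile]
    return frame

cache = {}
frame1 = render_frame({"bed": "bed_v1", "buzz": "pose_a"}, None, cache)
frame2 = render_frame({"bed": "bed_v1", "buzz": "pose_b"},
                      {"bed": "bed_v1", "buzz": "pose_a"}, cache)
# In frame2 only the "buzz" tile is re-rendered; the bed comes straight from the cache.
```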

A video of the very first 3D-rendered movie, developed by Ed Catmull, a co-founder of Pixar, can be viewed here: http://vimeo.com/16292363
This video might not seem very impressive by today's standards, but man oh man, back in the day this was a huge leap in computer graphics and animation.

Communications and Security: YouTube

Source: http://www.rudyhuyn.com/blog/wp-content/uploads/2013/05/YouTube1.png



My focus for this post is communications and networking, primarily YouTube. YouTube is a video streaming site that started in 2005. In the beginning, YouTube was a rather "dumb" streaming service that simply sent the video data in one big chunk, so there was no detection of slow internet speeds and no correction for lost data. You would essentially just download the whole video and play it.

Now, however, YouTube's goal is to give the user a seamless viewing experience by never letting the rotating "buffering" wheel appear. In order to do that, YouTube sends the video in pieces, allowing the player to decide during the stream, "Hey, the video can't keep up with this connection, I should drop the quality," or "Hey, the video is transferring very fast, I can raise the quality," and so on.

When you upload a video to YouTube, it actually breaks the file up and creates different resolutions of it. I was shocked by this because that's a lot of data to store! If you upload a 1080p video, YouTube also has to store 720p, 480p, 360p, and 240p versions of it. On top of that, it stores different file formats so the video can be played on many different platforms: MP4, 3GPP, WMV, etc. Each of those versions is then chopped up even further, as mentioned earlier, into chunks only a couple of seconds long. This is how YouTube can measure the factors that throw your stream out of whack, analyze them, and send out the video quality that matches your bandwidth.
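Here's a minimal sketch of that adaptive idea, my own simplification rather than YouTube's actual player logic: before requesting each chunk, the player looks at how fast recent chunks arrived and picks the highest quality rung the measured bandwidth can sustain. The bitrate numbers in the ladder are just illustrative.

```python
# Toy adaptive streaming: pick a quality for each chunk based on measured throughput.
LADDER = [           # (label, bitrate needed in kbps) -- illustrative numbers only
    ("1080p", 8000),
    ("720p", 5000),
    ("480p", 2500),
    ("360p", 1000),
    ("240p", 500),
]

def choose_quality(measured_kbps, headroom=0.8):
    """Return the highest rung the connection can sustain, keeping some headroom."""
    budget = measured_kbps * headroom
    for label, required in LADDER:
        if required <= budget:
            return label
    return LADDER[-1][0]  # connection is very slow: fall back to the lowest quality

# Simulate a connection that degrades and then recovers between chunks.
for kbps in [9000, 6000, 2000, 800, 4000]:
    print(f"throughput {kbps:>5} kbps -> request the {choose_quality(kbps)} chunk")
```

Real players also watch how full the playback buffer is, not just raw throughput, but the basic feedback loop is the same.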

Here's a great video that goes a little more in depth on how YouTube works: http://www.youtube.com/watch?v=OqQk7kLuaK4


If you've read past articles of mine, you'd recognize this channel. I really enjoy the content they put out since it's all related to computers so if you like the video, you should subscribe to them!

Artificial Intelligence: Watson



When someone says artificial intelligence, what comes to mind? Terminator? Skynet? World destruction? Well, that hasn't happened (yet), even though the idea of artificial intelligence has been around since the 1940s, when a handful of scientists came together to discuss the possibility of an artificial brain capable of calculations, decisions, and more. Some 70 years later, we have a somewhat intelligent, and by no means crude, computer simply called Watson.

Watson is an artificially intelligent (AI) computer system that can answer elaborate questions. IBM developed Watson to show that AI can exist and is on the right track. The main goal for Watson was to appear on the popular quiz show "Jeopardy!", where it beat two of the show's previous champions.

What's powering Watson? A cluster of ninety IBM Power 750 servers, each using a 3.5 GHz POWER7 eight-core processor with four threads per core. That amounts to a whopping 2,880 POWER7 processor threads, capable of chewing through data at 500 gigabytes per second. It also had 16 terabytes of RAM. That's 16,000 gigabytes!

Watson wasn't allowed to access the internet during the show, so it had to store all 21.6 terabytes of its data locally. That might not sound like much, but it was almost entirely text, which takes up very little space, so it amounts to an enormous library of content. When a question was asked, Watson was fed the text of the question and had to analyze it, generate hypotheses, find and score evidence, and then output its best answer, all within seconds. This is why Watson's computing power was needed: to search through the huge catalog of text it had stored.
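Here's a highly simplified sketch of that analyze / hypothesize / score loop. This is my own toy illustration with a made-up two-entry "corpus", not IBM's actual DeepQA architecture: candidate answers are ranked by how much their supporting text overlaps with the question.

```python
# Toy question answering: generate candidates, score their evidence, pick the best.
CORPUS = {  # hypothetical mini knowledge base: candidate answer -> supporting text
    "Toronto": "Toronto is the largest city in Canada and sits on Lake Ontario",
    "Chicago": "Chicago is a United States city whose airport is named for a WWII hero",
}

def evidence_score(question, evidence):
    """Crude score: how many words the question and the evidence share."""
    return len(set(question.lower().split()) & set(evidence.lower().split()))

def answer(question):
    ranked = sorted(CORPUS.items(),
                    key=lambda item: evidence_score(question, item[1]),
                    reverse=True)
    best, evidence = ranked[0]
    return best, evidence_score(question, evidence)

print(answer("Which United States city has an airport named for a WWII hero"))
```

The real system scored hundreds of candidate answers against many kinds of evidence in parallel, which is exactly why it needed all that hardware.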


Watson is a big step for the AI community with its capabilities. The hardware and software behind it are staggering! I hope this post gives you an idea of how much time and money (those processors aren't cheap) is needed to develop AI.

Sunday, November 17, 2013

History of Computer Science: Microsoft's Xbox

Source: http://compass.xbox.com/assets/fe/41/fe413ea6-6418-4dba-b28a-3bf299d17fe1.png?n=hero-xbox-one-innovate-955x610.png


Seeing as how the next generation of gaming consoles, the PlayStation 4 and Xbox One, is on its way, I thought it would be appropriate to write about the history of the Xbox! I've owned every Xbox console that's been released, which is why I'm focusing on it rather than the PlayStation.

Microsoft started developing its own gaming console back in 1998 to compete with Sony's hugely popular PlayStation line. Some of the biggest factors that gave the Xbox an edge over the PlayStation 2 were that it had double the processing power as well as the very popular first-person shooter Halo: Combat Evolved. On November 15th, 2001, the Xbox went on sale, and over the next three weeks it sold a million consoles. By today's standards, 1 million is chump change, but back then that was a huge deal! Microsoft then launched its online gaming service, Xbox Live, in 2002. It had to compete with the PlayStation 2's online service, which was free, while Xbox Live was a paid subscription. Ultimately, Xbox Live won out thanks to its better servers and features.

Next came the Xbox 360, released in November 2005, a full year before the PlayStation 3 would arrive. The console sold out completely in every region except Japan. As of June 2013, Microsoft had sold 78.2 million Xbox 360 consoles worldwide. In 2009, Microsoft announced a new toy for the Xbox 360, the Kinect, a revolutionary motion-control system designed to immerse players in their games. A year later, Microsoft announced a complete revamp of the Xbox 360's look with essentially identical hardware.

After eight years of waiting for the next-generation console, it has finally come. The Xbox One goes on sale on November 22nd, 2013, priced at $499. It provides a huge boost in processing power, with games running at 60 FPS (frames per second) in 1080p. It also comes packaged with a revamped Kinect and controller.


Words can't even begin to describe how excited I am to pick one up and see what it has to offer! None of Microsoft's consoles has ever disappointed me; I've never even gotten a Red Ring of Death (http://editorials.teamxbox.com/xbox/1651/The-Red-Ring-of-Death/p1/).

Sunday, November 10, 2013

File Sharing: The Next Big Tool For Any Project

Source: http://www.qnap.com/images/features/File_Sharing.png
File sharing is one of the biggest tools a team can utilize. It opens lines of communication and offers ways to collaborate on work with each other. Whether it be developing software or writing a group paper, any project can use file sharing to its advantage.

One of the biggest tools for file sharing is Google Docs. You can share Word, PowerPoint, and Excel files (any documents, really) and have your teammates edit and resubmit them. You can even look at a revision log to see who contributed to the document and when. I hadn't realized how great a tool this was until taking this course. The collaborative power you gain is enormous.

As a computer science major, I've always been looking for a way to safely and securely share my code with other team members. Google now offers a great way to share your projects on Google Code. You can upload your own files and share them with the world, or only with people you invite. This is another tool that I've learned to utilize and take advantage of. Take a look around their site and see if it can help you.

Any team can benefit from file sharing programs, and Google offers a great way to centralize your team's work and collaborate on it. Any programmer will tell you that working as a team is the way to get projects done quickly and efficiently, and file sharing is the next frontier in helping you accomplish that.



Wednesday, November 6, 2013

Data Structures: How to Strengthen Your Knowledge

Source: https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCziehDm8fCTyy5rcmfWphOSXqqSoUYfmIk0xHRxlCEDKRksXZKgJyNyt-pMGagc2YDuK019Rn5Kr_XNWArHuPaS8UqnvC9oAjXIGbIS4GUyQ26OebP5o915xv6WNX533ctHbCJ-a_nCw/s1600/DATA+Structures.png


The key to a solid foundation in data structures and algorithms is not memorizing every detail of each structure and its variants, nor memorizing every Big-O running time. That knowledge is beneficial, don't get me wrong, but in order to truly grasp each structure and algorithm, we must start from the basics.

One of the most important skills to build is visualizing the data structure: understand what it looks like, what it should accomplish, and how it is used. From basic stacks and queues to the trickiest self-balancing binary trees, visualization is key to understanding how the structure should work. I find that it helps to draw it out and map it step by step, as in the little sketch below.
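Here's a throwaway example of what I mean, just my own illustration: a few lines of Python that print the state of a stack after every operation so you can literally watch it grow and shrink.

```python
# Trace a stack so every push/pop shows the structure's current state.
def trace(ops):
    stack = []
    for op, *args in ops:
        if op == "push":
            stack.append(args[0])
        elif op == "pop":
            stack.pop()
        label = f"{op} {args[0]}" if args else op
        print(f"{label:<8} -> {stack}")

trace([("push", 1), ("push", 2), ("push", 3), ("pop",), ("push", 4), ("pop",)])
```

Doing the same thing on paper for a queue or a binary search tree makes the differences between them obvious very quickly.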

Next is knowing when and how to use different data structures for certain applications. It's difficult to know when you would need a heap instead of a tree or even a hash table, so fully understanding what each one does is essential. Practice makes perfect with this tip: grab a book on data structures, find some sample problems, and start knocking them out. It'll become second nature after a handful of problems.
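As a toy comparison of my own: if the job is "repeatedly hand me the smallest item," a heap is the natural fit, while "look things up by name" calls for a hash table. In Python that contrast looks like this:

```python
import heapq

# Heap: ideal when you repeatedly need the smallest (or highest-priority) item.
jobs = [(3, "run backups"), (1, "page on-call"), (2, "rotate logs")]
heapq.heapify(jobs)                       # O(n) to build
while jobs:
    priority, task = heapq.heappop(jobs)  # O(log n) per pop, always the smallest
    print(priority, task)

# Hash table (a Python dict): ideal for fast lookup by key, with no ordering.
phone_book = {"alice": "555-0100", "bob": "555-0199"}
print(phone_book["alice"])                # expected O(1) lookup
```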


Companies that ask technical questions during interviews will definitely ask about data structures, how well you know them, and whether you can apply that knowledge to sample problems. Being able to understand and visualize complex data structures will definitely put you ahead of the game. I hope these two tips helped you out! Good luck!
Source: http://img.wonderhowto.com/img/15/10/63474989383032/0/coding-fundamentals-introduction-data-structures-and-algorithms.w654.jpg