Introduction to quantum computing

Let’s start with the Wikipedia definition:

A quantum computer is any device for computation that makes direct use of distinctively quantum mechanical phenomena, such as superposition and entanglement, to perform operations on data. In a classical (or conventional) computer, information is stored as bits; in a quantum computer, it is stored as qubits (quantum bits). The basic principle of quantum computation is that the quantum properties can be used to represent and structure data, and that quantum mechanisms can be devised and built to perform operations with this data.

What is a superposition? What is entanglement? Can quantum computers compute NP problems in P? It’s hard to get started in this strange field.

First the basics: Quantum Computing for High School Students by Scott Aaronson is an easy-to-read introduction to quantum computing in general.

You’ve heard quantum computers can break RSA? Theoretically you’re right: using Shor’s algorithm, they can. I won’t explain it here, because Scott Aaronson has again written a beautiful and readable explanation.

Those two links should give you a good start. If not, just read on through Scott’s blog.

What you should know about quantum computers: they are not a magic weapon that makes computers faster. They can’t compute all NP problems in polynomial time. They are no silver bullet. Quantum computers can solve specific problems thanks to quantum phenomena, and some problems (like breaking RSA) can be reduced to these specific problems.

Currently scientists can build small quantum computers of about 7 qubits; in 2001 such a machine successfully factored 15 into 3 and 5. Scaling up the number of qubits to factor bigger numbers is an engineering problem.
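By the way, the number-theoretic trick behind Shor’s algorithm can be sketched classically. The following Python sketch is my own illustration (not from any quantum computing library): it brute-forces the order-finding step that a real quantum computer would do efficiently with the quantum Fourier transform, then applies the same reduction from factoring to order finding.

```python
from math import gcd
from random import randrange

def order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n), found by brute force.
    This is the step a quantum computer would speed up exponentially."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n):
    """Classical skeleton of Shor's reduction: factoring via order finding."""
    while True:
        a = randrange(2, n)
        d = gcd(a, n)
        if d > 1:
            return d, n // d          # lucky guess already shares a factor
        r = order(a, n)
        if r % 2:
            continue                  # need an even order, try another a
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue                  # trivial square root of 1, retry
        return gcd(y - 1, n), gcd(y + 1, n)

print(sorted(shor_factor(15)))  # [3, 5]
```

For n = 15 and, say, a = 7: the order of 7 mod 15 is 4, so y = 7² mod 15 = 4, and gcd(3, 15) = 3 and gcd(5, 15) = 5 recover the factors.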

My prediction: it’ll take some time, but eventually (probably decades from now) we will build quantum computers and scale them up to thousands of qubits. They won’t replace traditional CPUs, though; they will work as coprocessors, like the Cell.

The scary side note for a good discussion (recommended with some drinks): what happens to the world once we break RSA? Public-key encryption won’t work anymore. No online banking, no e-commerce, no cheap and secure communication. Does the economy break down and World War III wipe out humanity?

Published on September 20, 2007 at 12:10 pm

Don Knuth on progress in computer science

This blog led me to a premiere: my first letter written in TeX. Guess to whom I sent it? Don Knuth, inventor of TeX and author of The Art of Computer Programming. His answer looks like this:

Don Knuth’s letter

He corrected my use of quotation marks and then replied to my question “what are the most important problems in computer science?”, inspired by a talk by Richard Hamming.

I admire Dick Hamming enormously, but I disagree that his first question is “good”. Everybody knows famous, unsolved, “big” problems, which tend to be thought important because of their fame. And perhaps those problems are indeed important … although when they are finally resolved (like the question of deciding equivalence of deterministic languages) I find to my surprise that I don’t get very excited by the result, rather by the method used to get there.

I firmly believe that computer science advances by thousands of people solving small problems, which go together and create a massive edifice. Every year that goes by, hardly anything is done that appears to be a milestone worthy of mass attention; yet after five or ten years pass, the whole field has changed significantly. So I’m no fan of “top ten” problem lists.

Let’s hear it for the people who work on and solve small problems based on their own judgment rather than peer pressure. Like, for example, Hamming.

I agree that the progress in any single year seems insignificant, but I don’t want to concede that we can’t see where computer science is heading. This blog is my attempt to get an overview, to stand on the shoulders of giants and see a little further.

Published on September 18, 2007 at 9:09 am

Read also: xkcd

I’m currently on a trip to Paris and won’t write anything for some days. My recommendation until my next blog entry: xkcd – a webcomic of romance, sarcasm, math, and language.

The drawing is simple, but the humor is great. You’ll need some geekiness to understand the jokes.

Hint: there is always additional text hidden in the title attribute of the img tag. Your browser will probably show it as a tooltip if you rest the mouse on the comic for a second.

Published on September 13, 2007 at 10:10 am

9 Tips on how to give a technical presentation

Everybody can give a good presentation if she is willing to invest enough time. Here are my tips for giving technical presentations.

That means we’re talking about cold, hard facts. Most such talks are bad and boooring. A good presentation is hard work, not a trick.

1. Buy a book about rhetoric

Reading one article is not enough to give an interesting presentation. Whoever wants to be a good speaker has to study speaking.

You could start with the classic The Quick and Easy Way to Effective Speaking.

2. Content is king

You need content. If you don’t have anything to say, keep quiet. Many presentations are quite unsubstantial and need a flashy presenter. This doesn’t apply to us.

The content must be tailored to the audience. What knowledge can you take for granted? Underestimate the knowledge, but never underestimate the intelligence of your audience!

Fill your presentation! Every minute they listen to you should be worth it. Every sentence must be important.

You often hear that the first n seconds are important. They are not. Nobody will leave the room after 60 seconds, but I often know after 60 seconds whether the speaker intends to fill his time or to use it.

3. Slow and clear

We’re talking technical presentations: not a wedding oration, not a sales pitch, not an advertisement, not a political speech. That means omitting the filler and going right to the core. This should also be true on those other occasions, but here it’s essential.

Don’t say the same thing three times in a row with different phrasing. It is better

to speak

slooowly

and clearly

one sentence

after another.

Don’t read your content. Not from paper, not from the projector, not from the screen. You have practiced enough to know your text by heart, haven’t you?

4. A good presentation has a climax

A good presentation has one (exactly one) climax. Try to summarize your content into one sentence!

Now minimize that sentence! It should contain no comma and no “and”. Imagine your audience would memorize only one sentence from your talk – what would it be? You can even say it outright: “If you keep just one thing in mind from my talk, keep this: a good presentation is hard work, not a trick.”

A good presentation has one (exactly one) climax. Don’t fear repetition in this case. A good presentation has one (exactly one) climax.

The climax determines the rest of the content. Thus once you have your climax, you have a criterion for where to shorten your talk.

5. Humor is permitted

Yes, you can joke. A funny picture to lead into another topic is permitted, as long as it isn’t overdone and stays on topic.

Don’t laugh at your own jokes! A speaker does better with a wry sense of humor.

6. Slide design

You can find good tips at Presentation Zen. A nice rule of thumb is 6×6, though I favor 1×6.

Especially with a technical topic, it is tempting to use bullet points. It doesn’t help. It doesn’t stick. As the speaker, you will read the list point by point, with some intermediary “and” and “uh”, and bore the audience. Do it like Steve, not like Bill!

Animations? Cease and desist!

7. Darned technology

Video projectors and laptops sometimes don’t get along with each other. Computers break. Shit happens.

Show up early and test the real equipment! Don’t trust this test, though, and always carry a USB stick with your slides in PDF form.

Live demos are risky for the same reason. Sometimes the risk is worth it, sometimes it is not.

If it breaks, it’s your fault. Maybe it isn’t, but from the audience’s point of view, it’s only you on stage. That leads directly to the next point:

8. Don’t apologize

Whoever is on stage doesn’t apologize. At least don’t say more than a quick “sorry”.

It doesn’t matter who or what is at fault; it is the speaker’s responsibility to cope with it. Apologizing at length only hurts your presentation in the end.

If you are quick-witted, you may joke about yourself, but return to the agenda as soon as possible!

9. Practice, practice, practice

You can’t practice enough.

The only exception: stop when you already sound like you have practiced more than enough.

If you haven’t practiced enough, you can’t watch your audience. Once you have, you can read people’s faces to tell whether they have understood what you just said or whether you should repeat it. Eye contact happens automatically. Even the “uh” will disappear.

A good presentation is hard work, not a trick.

Published on September 12, 2007 at 9:09 am

Computer science vs software engineering

There is much debate about splitting computer science and software engineering: computer science would be the theoretical part and software engineering the practical one. But is that split feasible?

We probably agree that software development should be engineering. Building a software application should be like building a bridge. The problem is that it is not.

Joel Spolsky talks about three phases in software development:

  1. Design needs an artist
  2. Building needs an engineer
  3. Debugging needs a scientist

Since the building part is already engineering, we just need to figure out whether design and debugging can be turned into engineering or made (mostly) unnecessary. If you use a framework like Rails, you don’t need much design (apart from CSS and URLs). Type checking and verification could help to make debugging disappear.

A software developer describes the requirements for a job as a software developer:

  1. communication skills “The best developers are often the ones who can explain problems and solutions the most clearly to others”
  2. teams “Very few developers really work alone”
  3. analytical skills, particularly around ambiguous problems “It’s important that developers understand the intention of what they’re being asked to do as well as the implications of a solution they’re thinking of and can weight and communicate these”
  4. development processes “Not a theoretical one—they need to work on teams that use formal, top-down development process, agile development, teams with other developers, teams with test processes, and so on”
  5. an ability to learn on the fly
  6. competence in several programming languages “C++ is typically a must; C# or some other managed-code language is also mandatory, competence in one dynamic language, such as JavaScript, should also be present and the graduate should have the ability to know which to use when.”

No need for insight into algorithms or math. This may help, but isn’t mandatory.

There is still a big part, where computer science and software engineering intersect. Somebody has to build the standard and extension libraries. Somebody has to understand the math for multimedia and implement it.

For the common (web or desktop) application development, you don’t need to study computer science. This leads me to the conclusion that a software engineering education could be separated from general computer science. Do you agree?

Published on September 10, 2007 at 9:09 am

More women in computer science

That post about the Top 3 female computer scientists was more controversial than I had thought. Maybe I shouldn’t have included Culver. Nevertheless, I got an extended list of women from the comments. Thanks for your feedback!

Together with Alan Kay, Adele Goldberg developed the Smalltalk programming system and was at the forefront of object-oriented programming. She currently works at her startup, developing intranet knowledge-management software.

Monica Lam is a professor at Stanford and an author of the third edition of the Dragon book. She is supposedly one of the top 50 most cited computer scientists. One of her current projects at moka5 is a portable image for secure computing called LivePC.

Radia Perlman invented the spanning tree algorithm that keeps bridged networks efficient, loop-free and robust. She is active in network and security research at Sun. (interview)
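The core idea of the spanning tree algorithm can be sketched in a few lines. This is a simplified illustration of my own (the bridge IDs, topology and tie-breaking here are hypothetical, and real STP exchanges BPDU messages rather than running a global BFS): elect the lowest-ID bridge as root, keep for each bridge its shortest path toward the root, and block every other link.

```python
from collections import deque

# Toy bridged LAN: bridge IDs mapped to their neighbors (equal link costs).
links = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 4], 4: [2, 3]}

def spanning_tree(links):
    """Return (root, active links, blocked links) for a bridged network.
    The lowest bridge ID wins the root election; BFS from the root keeps
    shortest paths, with ties broken in favor of the lower neighbor ID."""
    root = min(links)
    parent = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in sorted(links[u]):      # lower ID wins ties
            if v not in parent:
                parent[v] = u
                queue.append(v)
    active = {tuple(sorted((v, p))) for v, p in parent.items() if p is not None}
    every = {tuple(sorted((u, v))) for u in links for v in links[u]}
    return root, active, every - active

root, active, blocked = spanning_tree(links)
print(root, active, blocked)
```

On this toy network, bridge 1 becomes root, links (1,2), (1,3) and (2,4) stay forwarding, and the redundant links (2,3) and (3,4) are blocked; they only come back if an active link fails.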

Together with Chuck Moore, Elizabeth Rather developed and promoted the Forth programming language.

Barbara Liskov was the first woman to get a PhD in computer science in the US. She currently works in the Programming Methodology Group at MIT.

Here comes the only woman on this list who isn’t really a computer scientist. Hedy Lamarr invented the concept of frequency hopping, leading to CDMA. Read her story! She fled from her first husband and became an actress. She shocked audiences with a nude scene and later got a star on the Hollywood Walk of Fame.

Irene Greif had a key role in the development of Lotus. She “brought a more user-friendly perspective to the field, bringing social scientists and computer scientists together for the first time“.

Pat Selinger built the first practical relational database at IBM and pioneered cost-based query optimization for relational databases.

I didn’t include women like Rebecca Wirfs-Brock. They are programmers or hackers, but not computer scientists.

Published on September 7, 2007 at 3:48 pm

Advice on writing a thesis

There is another Andreas out there, and he wrote a nice post on How to write a thesis. He should know: he just wrote his master’s thesis on the social dynamics of the Ubuntu open source community.

His advice is quite open, like “Be overly pedagogical!” or “Hack the data!”. Enjoy!

Published on September 7, 2007 at 8:08 am

Top 3 female computer scientists

Computer science needs more women, but it’s not as if there haven’t been notable ones already. I proudly present the three most important female computer scientists.

In 1815, a time when women were discouraged from participating in science, Ada Lovelace was born as the daughter of the poet Lord Byron. Her mother had her homeschooled in math and science. When she was 27, she translated an article about Babbage’s Analytical Engine and added a description of how to compute Bernoulli numbers with it. The Right Honourable Augusta Ada, Countess of Lovelace, wrote the first program in history.

Ten years later the “Enchantress of Numbers”, as Babbage called her, died of cancer. Even back then she envisioned what computers could be used for. Maybe Babbage could have promoted his machines through her writing skills, if she had lived longer. The computer revolution could have taken place a hundred years earlier.
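For the curious: the computation in Lovelace’s famous Note G can be recreated in a few lines today. This is a hedged modern sketch using the standard Bernoulli recurrence, not her actual table of Analytical Engine operations; note that what she labeled B7 corresponds to the modern B₈.

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    """B_0..B_m via the recurrence sum_{k=0}^{n} C(n+1, k) * B_k = 0,
    i.e. B_n = -(1/(n+1)) * sum_{k<n} C(n+1, k) * B_k, with exact fractions."""
    B = [Fraction(1)]
    for n in range(1, m + 1):
        s = sum(comb(n + 1, k) * B[k] for k in range(n))
        B.append(-s / (n + 1))
    return B

B = bernoulli(8)
print(B[2], B[4], B[8])  # 1/6 -1/30 -1/30
```

In this convention B₁ = -1/2 and every other odd-index Bernoulli number is zero, which is why only the even entries are interesting.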

A hundred years later, a woman called Grace Murray Hopper joined the US Navy. There she worked on some early computers and popularized the term “debugging” after a moth was found in the computer’s relays. After leaving active duty, she helped to build the first commercial computer, the UNIVAC. The COBOL programming language is largely based on her philosophy of using English instead of machine language. “Amazing Grace”, as she was sometimes called, was a good presenter, often getting standing ovations after lectures.

Then there is Leah Culver. Well, Leah hasn’t made an important contribution to computer science apart from getting a degree. She just had this cool idea to laser-etch her laptop and got declared the sexiest geek of 2006, which gives me the chance to catch your attention with a sexy picture.

The first woman to win the Turing Award (the Nobel Prize of computer science) is Frances Allen. “Fran” is a pioneer in the field of optimizing compilers. Her current title is IBM Fellow Emerita, “a position at IBM, which doesn’t require or allow any useful work, in terms of strategies in the company’s current business.”

Do you agree with my placement? Whom would you put at place four and five?

Update: more women in computer science

Published on September 6, 2007 at 3:04 pm

About the most important problem in computer science

This blog was founded to collect answers to one question, which Richard Hamming asked in his speech “You and Your Research”. His talk was more general, but it is a really good question, and I applied it to my field.

What are the most important problems in computer science?

I sent out lots of emails to famous and not-so-famous people. You can read each reply on its own, but here is my attempt to summarize them all. I don’t think I’ll get many more responses to the emails I sent, for now.

The big problem is complexity. A computer is like a bicycle for our minds, so we can tackle more complex problems. The tools for these problems are naturally complex, so users get easily confused. The results are buggy programs, distressed users and weird interfaces. “In spite of great advances, programming is still too difficult and machines are still too hard to use”, Kernighan wrote.

Prof. Tichy maps out the goal like this:

When everybody doesn’t just use a PC, but commands it [...]. Only then will computer science have fulfilled its potential, which is to free humanity from tiresome, boring, error-prone and dangerous work.

There are some really annoying problems on the way to simplicity. Prof. Adleman mentioned the famous P=NP problem. Joe Armstrong remarked that we don’t really know how to store and find things. For Prof. McCarthy the tangible path is “getting interactive provers for showing that programs meet their specifications and formalizing common sense knowledge and reasoning in mathematical logic“, thus improving our mind tools.

It’s not easy to discover how the computer can work for us. Once we have figured this out, we need to “make everything as simple as possible, but not simpler” (Einstein), because “simplicity is the ultimate sophistication” (da Vinci).

We’re still working on operating systems, compilers, languages and algorithms, because we haven’t gotten them right so far.

As a student of computer science, I want to find my personal attack vector on the big problem. But how do I find it? Philip Greenspun presented a nice thought experiment: The Fantasy Research Lab.

Do you miss anyone’s reply? Tell me! I’ll try to contact them. Also write me your own answer!

Published on September 5, 2007 at 11:11 am

Read also: Good Math, Bad Math

Everybody gets a niche in the blogosphere. With me in the computer science niche is Good Math, Bad Math by Mark Chu-Carroll. He blogs for two reasons:

  1. To ramble about the beauty of mathematics, and try to share enthusiasm for the subject.
  2. To track down the bozos who use bad math to lie, distort reality, and in general support bad arguments; demonstrate their errors and their dishonesty; and generally mock them.

The “bozos” are mostly Christian creationists, and he really isn’t nice to them. On the other hand, he just points out factual and logical errors.

I especially recommend the posts in his “good math” category. He writes, for example, about graph theory and fractals, and has built a little programming language on the π-calculus.

Way to go, Mark!

Published on September 4, 2007 at 11:11 am