More details here.
If the page doesn't load (Microsoft technology ;D), I'm pasting the interview below:
Sunday, August 19, 2007: Did Microsoft's Men In Black ever meet Linus Torvalds? Why is he so critical of GPLv3? Why does he slam Subversion? What would happen to kernel development if he chose to do something else more important? These are some of the questions the Linux/open source community from around the globe wanted to ask Linus. And here is Linus: candid, blunt, and at times diplomatic. Check whether the question you wanted to ask the father of Linux is here, and what he has to say...
Q: What are the future enhancements/paths/plans for the Linux kernel? --Subramani R
Linus: I've never been much of a visionary -- instead of looking at huge plans for the future, I tend to have a rather short timeframe of 'issues in the next few months'. I'm a big believer in that the 'details' matter, and if you take care of the details, the big issues will end up sorting themselves out on their own.
So I really don't have any great vision for what the kernel will look like in five years -- just a very general plan to make sure that we keep our eye on the ball. In fact, when it comes to me personally, one of the things I worry about the most isn't even the technical issues, but making sure that the 'process' works, and that people can work well with each other.
Q: How do you see the relationship of Linux and Solaris evolving in the future? How will it benefit the users?
Linus: I don't actually see a whole lot of overlap, except that I think Solaris will start using more of the Linux user space tools (which I obviously don't personally have a lot to do with -- I really only do the kernel). The Linux desktop is just so much better than what traditional Solaris has, and I expect Solaris to move more and more towards a more Linux-like model there.
On the pure kernel side, the licensing differences mean that there's not much cooperation, but it will be very interesting to see if that will change. Sun has been making noises about licensing Solaris under the GPL (either v2 or v3), and if the licence differences go away, that could result in some interesting technology. But I'm taking a wait-and-see attitude to that.
Q: Now that the GPLv3 has been finalised and released, do you foresee any circumstance that would encourage you to begin moving the kernel to it? Or, from your perspective, is it so bad that you would never consider it? -- Peter Smith / Naveen Mudunuru.
Linus: I think it is much improved over the early drafts, and I don't think it's a horrible licence. I just don't think it's the same kind of 'great' licence that the GPLv2 is.
So in the absence of the GPLv2, I could see myself using the GPLv3. But since I have a better choice, why should I?
That said, I try to always be pragmatic, and the fact that I think the GPLv3 is not as good a licence as the GPLv2 is not a 'black and white' question. It's a balancing act. And if there are other advantages to the GPLv3, maybe those other advantages would be big enough to tilt the balance in favour of the GPLv3.
Quite frankly, I don't really see any, but if Solaris really is to be released under the GPLv3, maybe the advantage of avoiding unnecessary non-compatible licence issues could be enough of an advantage that it might be worth trying to re-license the Linux kernel under the GPLv3 too.
Don't get me wrong -- I think it's unlikely. But I do want to make it clear that I'm not a licence bigot, per se. I think the GPLv2 is clearly the better licence, but licences aren't everything.
After all, I use a lot of programs that are under other licences. I might not put a project I start myself under the BSD (or the X11-MIT) licence, but I think it's a great licence, and for other projects it may well be the right one.
Q: Currently are there any Indians who you'd like to highlight as key contributors to the Linux kernel?
Linus: I have to admit that I don't directly work with anybody whom I actually know to be from India. That said, I should clarify a bit: I've very consciously tried to set up kernel development so that I don't end up working personally with a huge number of people.
I have this strong conviction that most humans are basically wired up to know a few people really well (your close family and friends), and I've tried to make the development model reflect that: with a 'network of developers', where people interact with maybe a dozen other people they trust, and those other people in turn interact with 'their' set of people they trust.
So while I'm in occasional contact with hundreds of developers who send me a random patch or two, I've tried to set up an environment where the bulk of what I do happens through a much smaller set of people that I know, just because I think that's how people work. It's certainly how I like to work.
Also, in all honesty, I don't even know where a lot of the people I work with live. Location ends up being pretty secondary. So while I'm pretty sure that none of the top 10-15 people I work with most closely are in India, maybe after this goes public, it might get pointed out that there is actually somebody from there!
Q: Since the Linux Kernel Development depends so heavily on you, how do you plan to organise/reorganise it for it to continue progressing without you, in case you decide to dedicate more time to your own life and family?
Linus: I've long since come to the realisation that Linux is much bigger than me. Yes, I'm intimately involved in it still, and I have a fairly large day-to-day impact on it, and I end up being the person who, in some sense, acts as the central point for a lot of kernel activities; but no -- I wouldn't say that Linux 'depends heavily' on me.
So if I had a heart attack and died tomorrow (happily not likely: I'm apparently healthy as anything), people would certainly notice, but there are thousands of people involved in just the kernel, and there're more than a few that could take over for me with little real confusion.
Q: India is one of the major producers of software engineers, yet we don't contribute much to the Linux domain. What do you think is keeping Indians from becoming proactive on that front? How do you feel we could encourage Indians to get involved and contribute heavily? You have a fan following in India; could your iconic image be used to inspire enthusiasts? -- Bhuvaneswaran Arumugam.
Linus: This is actually a very hard question for me to answer. Getting into open source is such a complicated combination of both infrastructure (Internet access, education, you name it), flow of information and simply culture that I can't even begin to guess what the biggest stumbling block could be.
In many ways, at least those with an English-speaking culture in India should have a rather easy time getting involved with Linux and other open source projects, if only thanks to the lack of a language barrier. Certainly much easier than many parts of Asia or even some parts of Europe.
Of course, while that is a lot of people, it's equally obviously not the majority in India, and I personally simply don't know enough about the issues in India to be able to make an even half-way intelligent guess about what the best way forward is. I suspect that an enthusiastic local user community is always the best way, and I think you do have that.
As to my 'iconic image', I tend to dislike that part personally. I'm not a great public speaker, and I've avoided travelling for the last several years because I'm not very comfortable being seen as this iconic 'visionary'. I'm just an engineer, and I just happen to love doing what I do, and to work with other people in public.
Q: What would be a good reason for you to consider visiting India? -- Frederick [FN] Noronha.
Linus: As mentioned in the first answer, I absolutely detest public speaking, so I tend to avoid conferences, etc. I'd love to go to India for a vacation some day, but if I do, I'd likely just do it incognito -- not tell anybody beforehand and just go as a tourist to see the country!
Q: Recently, you seemed to slam Subversion and CVS, questioning their basic architecture. Now that you've got responses from the Subversion community, do you stand corrected, or are you still unconvinced? B Arumugam.
Linus: I like making strong statements, because I find the discussion interesting. In other words, I actually tend to 'like' arguing. Not mindlessly, but I certainly tend to prefer the discussion a bit more heated, and not just entirely platonic.
And making strong arguments occasionally ends up resulting in a very valid rebuttal, and then I'll happily say: "Oh, ok, you're right."
But no, that didn't happen on SVN/CVS. I suspect a lot of people really don't much like CVS, so I didn't really even expect anybody to argue that CVS was really anything but a legacy system. And while I've gotten a few people who argued that I shouldn't have been quite so impolite against SVN (and hey, that's fair -- I'm really not a very polite person!), I don't think anybody actually argued that SVN was 'good'.
SVN is, I think, a classic case of 'good enough'. It's what people are used to, and it's 'good enough' to be used fairly widely, but it's good enough in exactly the sense DOS and Windows were 'good enough'. Not great technology, just very widely available, and it works well enough for people and looks familiar enough that people use it. But very few people are 'proud' of it, or excited about it.
Git, on the other hand, has some of the 'UNIX philosophy' behind it. Not that it is about UNIX, per se, but like original UNIX, it had a fundamental idea behind it. For UNIX, the underlying philosophy was/is: "Everything is a file." For git, it's: "Everything is just an object in the content-addressable database."
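To make that 'content-addressable' idea concrete, here is a minimal sketch (not part of the interview; the helper name git_blob_id is made up for illustration) of how git names a file's contents: the object ID is simply the SHA-1 of a small header plus the raw bytes, so identical content always gets the identical name.

import hashlib

def git_blob_id(content: bytes) -> str:
    # Git stores a file's contents as a 'blob' object whose name is the
    # SHA-1 of the header "blob <size>\0" followed by the raw bytes.
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

if __name__ == "__main__":
    # Matches the output of `git hash-object` for the same bytes.
    print(git_blob_id(b"hello world\n"))  # 3b18e512dba79e4c8300dd08aeb37f8e728b8dad

Because the name is derived from the content itself, every object in the database can be shared, deduplicated and verified by its hash, which is the property the rest of git is built on.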
Q: Is having so many distros a good or bad idea? Choice is fine, but one does not need to be pampered with choices. Instead of so many man hours being spent in building hundreds of distros, wouldn't it be easier to get into the enterprise and take on the MS challenge if people could come together and support fewer distros (1 for each use maybe)? What's your view on that? -- Srinivasan S.
Linus: I think having multiple distros is an inevitable part of open source. And can it be confusing? Sure. Can it be inefficient? Yes. But I'd just like to compare it to politics: 'democracy' has all those confusing choices, and often none of the choices is necessarily what you 'really' want either, and sometimes you might feel like things would be smoother and more efficient if you didn't have to worry about the whole confusion of voting, different parties, coalitions, etc.
But in the end, choice may be inefficient, but it's also what keeps everybody involved at least 'somewhat' honest. We all probably wish our politicians were more honest than they are, and we all probably wish that the different distros sometimes made other choices than they do, but without that choice, we'd be worse off.
Q: Why do you think CFS is better than SD?
Linus: Part of it is that I have worked with Ingo [Molnar] for a long time, which means that I know him, and know that he'll be very responsive to any issues that come up. That kind of thing is very important.
But part of it is simply about numbers. Most people out there actually say that CFS is better than SD. Including, very much, on 3D games (which people claimed was a strong point of SD).
At the same time, though, I don't think any piece of code is ever 'perfect'. The best thing to happen is that the people who want to be proponents of SD will try to improve it so much that the balance tips over the other way -- and we'll keep both camps trying interesting things, because the internal competition motivates them.
Q: In a talk you had at Google about git, someone asked you how you would take an extremely large code base that is currently handled with something centralised and transition to git without stopping business for six months. What was your response to that? -- Jordan Uggla.
Linus: Ahh. That was the question where I couldn't hear the questioner well (the questions were much more audible in the recordings), and I noticed afterwards, when I went back and listened to the recorded audio, that I didn't answer the question he asked, but the question I thought he'd asked.
Anyway, we do have lots of import tools, so that you can actually just import a large project from just about any other previous SCM into git. But the problem, of course, often doesn't end up being the act of importing itself, but just having to 'get used to' the new model!
And quite frankly, I don't think there is any other answer to that 'get used to it' but to just start out and try it. You obviously do not want to start out by importing the biggest and most central project you have; that would indeed just make everything come to a standstill, and make everybody very unhappy indeed.
So nobody sane would advocate moving everything over to git overnight, and forcing people to change their environment. No. You'd start with a smaller project inside a company, perhaps something that just one group mostly controls and maintains, and start off by converting that to git. That way you get people used to the model, and you start having a core group with the knowledge about how git works and how to use it within the company.
And then you just extend on that. Not in one go. You'd import more and more of the projects -- even if you have the 'one big repository' model at your company, you also almost certainly have that repository as a set of modules, because having everybody check out everything is just not a workable mode of operation (unless 'everything' is just not very large).
So you'd basically migrate one module at a time, until you get to the point where you're so comfortable with git that you can just migrate the rest (or the 'rest' is so legacy that nobody even cares).
And one of the nice features of git is that it actually plays along pretty well with a lot of other SCMs. That's how a lot of git users use it: 'they' may use git, but sometimes the people they work with don't even realise, because they see the results of it propagated into some legacy SCM.
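For readers curious about what the import tools mentioned above actually produce, here is a minimal sketch (not part of the interview; the repository path and importer identity are made up) that feeds a single file and commit into git through the 'git fast-import' stream, the plumbing format most SCM converters target:

import subprocess
import time

def fast_import_one_commit(repo_dir, path, content, message):
    # Feed one blob and one commit into an existing git repository via the
    # 'git fast-import' stream: blobs first, then a commit that references
    # them by mark. repo_dir must already be an initialised repository.
    now = int(time.time())
    ident = b"Importer <importer@example.com> %d +0000" % now
    stream = b"".join([
        b"blob\n",
        b"mark :1\n",
        b"data %d\n" % len(content), content, b"\n",
        b"commit refs/heads/master\n",
        b"mark :2\n",
        b"author " + ident + b"\n",
        b"committer " + ident + b"\n",
        b"data %d\n" % len(message.encode()), message.encode(), b"\n",
        b"M 100644 :1 " + path.encode() + b"\n",
        b"\n",
    ])
    subprocess.run(["git", "fast-import"], input=stream, cwd=repo_dir, check=True)

if __name__ == "__main__":
    # Usage sketch: an empty repository receiving a single imported revision.
    subprocess.run(["git", "init", "--quiet", "/tmp/imported-repo"], check=True)
    fast_import_one_commit("/tmp/imported-repo", "README", b"imported content\n",
                           "Initial import from the legacy SCM")

A real conversion would walk the legacy system's history and emit one such commit block per revision; bridges like git svn automate exactly that, and the same bridges are what let git users push their results back into a legacy SCM.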
Q: Did they ever experiment with alternate instruction set implementations at Transmeta? [The Transmeta Crusoe chip seemed like a very soft CPU -- reminding one of the Burroughs B1000 interpretive machine, which actually implemented multiple virtual machines: there was one for system software, another for Cobol, another for Fortran. If that is correct, then one could implement a Burroughs 6/7000 or HP3000-like stack architecture on the chip, or an instruction set suitable for the JVM, etc.] -- Anil Seth.
Linus: We did indeed have some alternate instruction sets, and while I still am not really supposed to talk about it, I can say that we did have a public demonstration of mixing instruction sets. We had a technology showcase where you could run x86 instructions side-by-side with Java byte code (actually, it was a slightly extended pico-Java, iirc).
I think the app we showed running was running DOOM on top of Linux, where the Linux parts were a totally standard x86 distribution, but the DOOM binary was a specially compiled version where part of the game was actually compiled pico-Java. And the CPU ended up running them both the same way -- as a JIT down to the native VLIW instruction set.
(The reason for picking DOOM was just that source code was available, and the core parts of the game were small enough that it was easy to set it up as a demonstration -- and it was obviously visually interesting.)
There were more things going on internally, but I can't really talk about them. And I wasn't actually personally involved with the Java one either.
Q: 386BSD, from which NetBSD, FreeBSD and OpenBSD were derived, was there well before Linux, but Linux spread much more than 386BSD and its derivatives. How much of this do you attribute to the choice of the licence and how much to the development process you chose? Don't you think that the GPLv3 protects the freedom that has bred Linux better than the BSDs till now, more than the GPLv2 can? -- Tiziano Mosconi from Italy.
Linus: I think there's both a licence issue, and a community and personality issue. The BSD licences always encouraged forking, but also meant that if somebody gets really successful and makes a commercial fork, you cannot necessarily join back. And so even if that doesn't actually happen (and it did, in the BSD cases -- with BSDi), people can't really 'trust' each other as much.
In contrast, the GPLv2 also encourages forking, but it not only encourages the branching off part, it also encourages (and 'requires') the ability to merge back again. So now you have a whole new level of trust: you 'know' that everybody involved will be bound by the licence, and won't try to take advantage of you.
So I see the GPLv2 as the licence that allows people the maximum possible freedom within the requirement that you can always join back together again from either side. Nobody can stop you from taking the improvements to the source code.
So is the BSD licence even more 'free'? Yes. Unquestionably. But I just wouldn't want to use the BSD licence for any project I care about, because I not only want the freedom, I also want the trust so that I can always use the code that others write for my projects.
So to me, the GPLv2 ends up being a wonderful balance of 'as free as you can make it', considering that I do want everybody to be able to trust so that they can always get the source code and use it.
Which is why I think the GPLv3 ends up being a much less interesting licence. It's no longer about that trust about "getting the source code back"; it has degenerated into a "I wrote the code, so I should be able to control how you use it."
In other words, I just think the GPLv3 is too petty and selfish. I think the GPLv2 has a great balance between 'freedom' and 'trust'. It's not as free as the BSD licences are, but it gives you peace of mind in return, and matches what I consider 'tit-for-tat': I give source code, you give me source code in return.
The GPLv3 tries to control the 'use' of that source code. Now it's, "I give you my source code, so if you use it, you'd better make your devices hackable by me." See? Petty and small-minded, in my opinion.
Q: Slowly but steadily, features of the -rt tree are getting integrated into the mainline. What are your current thoughts regarding a merger of the remaining -rt tree into the mainline (and I'm not talking about the CFS)? -- Alex van der Wal.
Linus: I won't guarantee that everything from -rt will 'ever' be merged into the standard kernel (there may be pieces that simply don't end up making sense in the generic kernel), but yes, over the years we've actually integrated most of it, and the remaining parts could end up making it one of these days.
I'm a big fan of low-latency work, but at the same time I'm pretty conservative, and I pushed back on some of the more aggressive merging, just because I want to make sure that it all makes sense for not just some extreme real time perspective, but also for 'normal' users who don't need it. And that explains why the process has been a pretty slow but steady trickle of code that has gotten merged, as it was sufficiently stable and made sense.
That, by the way, is not just an -rt thing; it's how a lot of the development happens. -rt just happens to be one of the more 'directed' kernel projects, and one where the main developer is pretty directly involved with the normal kernel too. But quite often the migration of other features (security, virtual memory changes, virtualisation, etc) follows a similar path: they get written up in a very targeted environment, and then pieces of the features get slowly but surely merged into the standard kernel.
Q: I'm very curious about what the future holds for file systems in the kernel. What do you think about Reiser4, XFS4, ZFS and the new project started by Oracle? ZFS has been receiving a lot of press these days. Reiser4 delivers very good benchmarks, and XFS4 is trying to keep up, whereas the one by Oracle has a lot of the same specs as Sun's ZFS. Where are we heading? Which FS looks the most promising in your opinion? -- Ayvind Binde.
Linus: Actually, just yesterday we had a git performance issue, where ZFS was orders of magnitude slower than UFS for one user (not under Linux, but git is gaining a lot of traction even outside of kernel development). So I think a lot of the 'new file system' mania is partly fed by knowing about the issues with old filesystems, and then the (somewhat unrealistic) expectation that a 'new and improved' filesystem will make everything perfect.
In the end, this is one area where you just let people fight it out. See who comes out the winner -- and it doesn't need to be (and likely will not be) a single winner. Almost always, the right choice of file system ends up depending on the load and circumstances.
One thing that I'm personally more excited about than any of the filesystems you mention is actually the fact that Flash-based hard disks are quickly becoming available even for 'normal' users. Sure, they're still expensive (and fairly small), but Flash-based storage has such a different performance profile from rotating media, that I suspect that it will end up having a large impact on filesystem design. Right now, most filesystems tend to be designed with the latencies of rotating media in mind.
Q: The operating system is becoming less and less important. You have said several times that the user is not supposed to 'see' the operating system at all. It is the applications that matter. Browser-based applications, like Google's basic office applications, are making an impact. Where do you think operating systems are headed?
Linus: I don't really believe in the 'browser OS', because I think that people will always want to do some things locally. It might be about security, or simply about privacy reasons. And while connectivity is widely available, it certainly isn't 'everywhere'.
So I think the whole 'Web OS' certainly is part of the truth, but another part that people seem to dismiss is that operating systems have been around for decades, and it's really a fairly stable and well-known area of endeavour. People really shouldn't expect the OS to magically change: it's not like people were 'stupid' back in the 60s either, or even that hardware was 'that' fundamentally different back then!
So don't expect a revolution. I think OSs will largely continue to do what they do, and while they'll certainly evolve, I don't think they'll change radically. What may change radically are the interfaces and the things you do on top of the OS (and certainly the hardware beneath the OS will continue to evolve too), and that's what people obviously care about.
The OS? It's just that hidden thing that makes it all possible. You really shouldn't care about it, unless you find it very interesting to know what is really going on in the machine.
Q: The last I heard, you were using a PPC G4/5 for your main personal machine -- what are you using now, and why?
Linus: I ended up giving up on the PowerPC, since nobody is doing any workstations any more, and especially since x86-64 has become such an undeniable powerhouse. So these days, I run a bog-standard PC, with a normal Core 2 Duo on it.
It was a lot of fun to run another architecture (I ran with alpha as my main architecture way back when, for a few years, so it wasn't the first time either), but commodity CPUs are where it's at. The only thing that I think could ever really displace the x86 architecture would come from below: if something makes us not use x86 as our main ISA in a decade, I think it would be ARM, thanks to the mobile device market.
Q: What does Linux mean to you -- a hobby, philosophy, the meaning of life, a job, the best OS, something else...?
Linus: It's some of all of that. It's a hobby, but a deeply meaningful one. The best hobbies are the ones that you care 'really' deeply about. And these days it's obviously also my work, and I'm very happy to be able to combine it all.
I don't know about a 'philosophy', and I don't really do Linux for any really deeply held moral or philosophical reasons (I literally do it because it's interesting and fun), but it's certainly the case that I have come to appreciate the deeper reasons why I think open source works so well. So I may not have started to do Linux for any such deep reasons, and I cannot honestly say that that is what motivates me, but I do end up thinking about why it all works.
Q: Did Microsoft's 'Men in Black' ever talk to you? -- Zidagar - Antonio Parrella
Linus: I've never really talked to MS, no. I've occasionally been at the same conferences with some MS people (I used to go to more conferences than I do these days), but I've never really had anything to do with them. I think there is a mutual wariness.