Friday, October 6, 2017

Does Agile work?

Read a nice article on why Agile does not always work.


Tuesday, October 3, 2017

Bad movies


I’ve been watching fewer and fewer movies nowadays, partly because I hardly get the time, but mostly because most of the movies coming out nowadays are garbage. There are tricks to identify a bad movie. One trick is asking someone who has watched it. If they tell you it is okay for a one-time watch, then it is a bad movie.

Another trick is to check the Wikipedia page of that movie some time after it comes out. If it’s not updated, especially the plot section, then it is really bad. I mean really bad. Run! Escape!

But there are even worse movies coming out, which, for some reason, people seem to enjoy. And write about. I watched a few of them this year, because they are available on DVD rip sites. Like the latest Transformers movie, The Last Knight. I wonder what the hell the creators were thinking? Right, they were not thinking. It’s so bad, I deleted my download the second I skipped to the end. Same for Baywatch, the new Mummy reboot, and Furious 8. Horrible, horrible movies.

It’s even worse for Bollywood. That Harry-Sejal bullshit, Raabta, Lipstick-Burka…eww…and those are just the ones I watched. There are far more outrageous crimes being committed. Can’t believe they remade that bad Tamil movie as well.

What I cannot fathom is why the public is still watching these kinds of movies. They still make money, and some of them still get remakes and sequels. They get prime air time when they debut on cable. The makers are able to make a quick buck before the word gets out; it’s almost like a huge Ponzi scheme. And hardly anyone seems to care.

I guess it’s a little difficult when there aren’t many options. I can’t say for sure, but I think this is one of the worst years for movie entertainment.



So bad, I had to go back to my collection of classics, just to stay sane.

Plus point: I came across an Indian version of the Golden Raspberry Awards for bad movies, termed the Golden Kela Awards.

Thursday, September 28, 2017

Id's Wizardry

All images: Id Software

Over the last 12 years, the evolving realism of Id Software's graphics has set the bar for the industry. Among the games [bottom to top, left]: Commander Keen (1990); Hovertank (1991); Wolfenstein 3D (1992); Doom (1993); Quake (1996); and Return to Castle Wolfenstein (2001).

It's after midnight when the carnage begins. Inside a castle, soldiers chase Nazis through the halls. A flame-thrower unfurls a hideous tongue of fire. This is Return to Castle Wolfenstein, a computer game that's as much a scientific marvel as it is a visceral adventure. It's also the latest product of Id Software (Mesquite, Texas). Through its technologically innovative games, Id has had a huge influence on everyday computing, from the high-speed, high-color, and high-resolution graphics cards common in today's PCs to the marshalling of an army of on-line game programmers and players who have helped shape popular culture.

Id shot to prominence 10 years ago with the release of its original kill-the-Nazis-and-escape game, Wolfenstein 3D. It and its successors, Doom and Quake, cast players as endangered foot soldiers, racing through mazes while fighting monsters or, if they so chose, each other. To bring these games to the consumer PC and establish Id as the market leader required skill at simplifying difficult graphics problems and cunning in exploiting on-going improvements in computer graphics cards, processing power, and memory size [see illustration, Driven]. To date, their games have earned over US $150 million in sales, according to The NPD Group, a New York City market research firm.

It all began with a guy named Mario

The company owes much of its success to advances made by John Carmack, its 31-year-old lead programmer and cofounder who has been programming games since he was a teenager.

Back in the late 1980s, the electronic gaming industry was dominated by dedicated video game consoles. Most game software was distributed in cartridges, which slotted into the consoles, and as a consequence, writing games required expensive development systems and corporate backing.

The only alternative was home computer game programming, an underworld in which amateurs could develop and distribute software. Writing games for the low-powered machines required only programming skill and a love of gaming.

Four guys with that passion were artist Adrian Carmack; programmer John Carmack (no relation); game designer Tom Hall; and programmer John Romero. While working together at Softdisk (Shreveport, La.), a small software publisher, these inveterate gamers began moonlighting on their own titles.

At the time, the PC was still largely viewed as being for business only. It had, after all, only a handful of screen colors and squeaked out sounds through a tiny tinny speaker. Nonetheless, the Softdisk gamers figured this was enough to start using the PC as a games platform.

First, they decided to see if they could recreate on a PC the gaming industry's biggest hit at the time, Super Mario Brothers 3. This two-dimensional game ran on the Super Nintendo Entertainment System, which drove a regular television screen. The object was to make a mustached plumber, named Mario, leap over platforms and dodge hazards while running across a landscape below a blue sky strewn with puffy clouds. As Mario ran, the terrain scrolled from side to side to keep him more or less in the middle of the screen. To get the graphics performance required, the Nintendo console resorted to dedicated hardware. "We had clear examples of console games [like Mario] that did smooth scrolling," John Carmack says, "but [in 1990] no one had done it on an IBM PC."

After a few nights of experimentation, Carmack figured out how to emulate the side-scrolling action on a PC. In the game, the screen image was drawn, or rendered, by assembling an array of 16-by-16-pixel tiles. Usually the on-screen background took over 200 of these square tiles, a blue sky tile here, a cloud tile there, and so on. Graphics for active elements, such as Mario, were then drawn on top of the background.

Any attempt to redraw the entire background every frame resulted in a game that ran too slowly, so Carmack figured out how to redraw only a handful of tiles every frame, speeding the game up immensely. His technique relied on a new type of graphics card that had become available, and the observation that the player's movement occurred incrementally, so most of the next frame's scenery had already been drawn.

The new graphics cards were known as Enhanced Graphics Adapter (EGA) cards. They had more on-board video memory than the earlier Color Graphics Adapter (CGA) cards and could display 16 colors at once, instead of four. For Carmack, the extra memory had two important consequences. First, while intended for a single relatively high-resolution screen image, the card's memory could hold several video screens' worth of low-resolution images, typically 300 by 200 pixels, simultaneously, good enough for video games. By pointing to different video memory addresses, the card could switch which image was being sent to the screen at around 60 times a second, allowing smooth animation without annoying flicker. Second, the card could move data around in its video memory much faster than image data could be copied from the PC's main memory to the card, eliminating a major graphics performance bottleneck.

Carmack wrote a so-called graphics display engine that exploited both properties to the full by using a technique that had been originally developed in the 1970s for scrolling over large images, such as satellite photographs. First, he assembled a complete screen in video memory, tile by tile--plus a border one tile wide [see illustration, "Scrolling With the Action"]. If the player moved one pixel in any direction, the display engine moved the origin of the image it sent to the screen by one pixel in the corresponding direction. No new tiles had to be drawn. When the player's movements finally pushed the screen image to the outer edge of a border, the engine still did not redraw most of the screen. Instead, it copied most of the existing image--the part that would remain constant--into another portion of video memory. Then it added the new tiles and moved the origin of the screen display so that it pointed to the new image.

Scrolling With the Action: For two-dimensional scrolling in his PC game, programmer John Carmack cheated a little by not always redrawing the background. He built the background of graphical tiles stored in video memory [left] but only sent part of the image to the screen [top left, inside orange border]. As the play character [yellow circle] moved, the background sent to the screen was adjusted to include tiles outside the border [see top right]. New background elements would be needed only after a shift of one tile width. Then, most of the background was copied to another region of video memory [see bottom right], and the screen image centered in the new background.

In short, rather than having the PC redraw tens of thousands of pixels every time the player moved, the engine usually had to change only a single memory address--the one that indicated the origin of the screen image--or, at worst, draw a relatively thin strip of pixels for the new tiles. So the PC's CPU was left with plenty of time for other tasks, such as drawing and animating the game's moving platforms, hostile characters, and the other active elements with which the player interacted.
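The bookkeeping behind this trick can be sketched in a few lines of Python. This is only an illustration of the idea, with made-up names, return values, and sizes; the real engine manipulated EGA video memory and hardware registers directly.

```python
# Illustrative sketch of "adaptive tile refresh" style scrolling.
# All names and constants here are hypothetical.

TILE = 16                    # tile size in pixels
SCREEN_W, SCREEN_H = 320, 200

class ScrollingEngine:
    def __init__(self):
        # Video memory holds one extra tile of border on every side,
        # so small movements never require drawing new tiles.
        self.buf_w = SCREEN_W + 2 * TILE
        self.buf_h = SCREEN_H + 2 * TILE
        self.origin = (TILE, TILE)   # top-left of the visible window

    def scroll(self, dx, dy):
        ox, oy = self.origin[0] + dx, self.origin[1] + dy
        # While the window stays inside the border, only the origin
        # pointer changes -- no pixels are redrawn at all.
        if 0 <= ox <= 2 * TILE and 0 <= oy <= 2 * TILE:
            self.origin = (ox, oy)
            return "moved origin only"
        # Hit the border edge: copy the still-valid image to another
        # region of video memory, re-center the origin, and draw only
        # the thin strip of new tiles that scrolled into view.
        self.origin = (TILE, TILE)
        return "copied image + drew new tile strip"
```

The common case is the cheap one: a one-pixel move changes a single address, and only after a full tile-width of movement does any real drawing happen.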

Hall and Carmack knocked up a Mario clone for the PC, which they dubbed Dangerous Dave in Copyright Infringement. But Softdisk, their employer, had no interest in publishing what were then high-end EGA games, preferring to stick with the market for CGA applications. So the nascent Id Software company went into moonlight overdrive, using the technology to create its own side-scrolling PC game called Commander Keen. When it came time to release the game, they hooked up with game publisher Scott Miller, who urged them to go with a distribution plan that was as novel as their technology: shareware.

In the 1980s, hackers started making their programs available through shareware, which relied on an honor code: try it and if you like it, pay me. But it had been used only for utilitarian programs like file tools or word processors. The next frontier, Miller suggested, was games. Instead of giving away the entire game, he said, why not give out only the first portion, then make the player buy the rest? Id agreed to let Miller's company, Apogee, release the game. Prior to Commander Keen, Apogee's most popular shareware game had sold a few thousand copies. Within months of Keen's release in December 1990, the game had sold 30 000 copies. For the burgeoning world of PC games, Miller recalls, "it was a little atom bomb."

Going for depth

Meanwhile programmer Carmack was again pushing the graphics envelope. He had been experimenting with 3-D graphics ever since junior high school, when he produced wire-frame MTV logos on his Apple II. Since then, several game creators had experimented with first-person 3-D points of view, where the flat tiles of 2-D games are replaced by polygons forming the surfaces of the player's surrounding environment. The player no longer felt outside, looking in on the game's world, but saw it as if from the inside.

The results had been mixed, though. The PC was simply too slow to redraw detailed 3-D scenes as the player's position shifted. It had to draw lots of surfaces for each and every frame sent to the screen, including many that would be obscured by other surfaces closer to the player.

Carmack had an idea that would let the computer draw only those surfaces that were seen by the player. "If you're willing to restrict the flexibility of your approach," he says, "you can almost always do something better."

So he chose not to address the general problem of drawing arbitrary polygons that could be positioned anywhere in space, but designed a program that would draw only trapezoids. His concern at this time was with walls (which are shaped like trapezoids in 3-D), not ceilings or floors.

For his program, Carmack simplified a technique for rendering realistic images on then high-end systems. In raycasting, as it is called, the computer draws scenes by extending lines from the player's position in the direction he or she is facing. When it strikes a surface, the pixel corresponding to that line on the player's screen is painted the appropriate color. None of the computer's time is wasted on drawing surfaces that would never be seen anyway. By only drawing walls, Carmack could raycast scenes very quickly.

Carmack's final challenge was to furnish his 3-D world with treasure chests, hostile characters, and other objects. Once again, he simplified the task, this time by using 2-D graphical icons, known as sprites. He got the computer to scale the size of the sprite, depending on the player's location, so that he did not have to model the objects as 3-D figures, a task that would have slowed the game painfully. By combining sprites with raycasting, Carmack was able to place players in a fast-moving 3-D world. The upshot was Hovertank, released in April 1991. It was the first fast-action 3-D first-person action shooter for the PC.

Around this time, fellow programmer Romero heard about a new graphics technique called texture mapping. In this technique, realistic textures are applied to surfaces in place of their formerly flat, solid colors. Id put the technique to work, covering walls in green slime in its next game, Catacombs 3D. While running through a maze, the player shot fireballs at enemy figures using another novelty--a hand drawn in the lower center of the screen. It was as if the player were looking down on his or her own hand, reaching into the computer screen. By including the hand in Catacombs 3D, Id Software was making a subtle, but strong, psychological point to its audience: you are not just playing the game--you're part of it.

Instant sensation

For Id's next game, Wolfenstein 3D, Carmack refined his code. A key decision ensured the graphics engine had as little work to do as possible: to make the walls even easier to draw, they would all be the same height.

This speeded up raycasting immensely. In normal raycasting, one line is projected through space for every pixel displayed. A 320-by-200-pixel screen image of the type common at the time required 64 000 lines. But because Carmack's walls were uniform from top to bottom, he had to raycast along only one horizontal plane, just 320 lines [see diagram, Raycasting 3-D Rooms].

Raycasting 3-D Rooms: To quickly draw three-dimensional rooms without drawing obscured and thus unnecessary surfaces, Carmack used a simplified form of raycasting, a technique used to create realistic 3-D images. In raycasting, the computer draws scenes by extending lines from the player's viewpoint [top], through an imaginary grid, so that they strike the surfaces the player sees; only these surfaces get drawn.
Carmack simplified things by keeping all the walls the same height. This allowed him to extend the rays from the player in just a single horizontal 2-D plane [middle] and scale the apparent height of the wall according to its distance from the player, instead of determining every point on the wall individually. The result is the final 3-D image of the walls [bottom].
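The one-ray-per-column idea can be sketched roughly like this in Python. The grid map, the stepping method, and the scaling constants are all illustrative; Wolfenstein's actual code used faster fixed-point math and a proper grid-traversal algorithm rather than naive stepping.

```python
import math

# A tiny hypothetical map: 1 = wall cell, 0 = empty cell.
GRID = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

def cast_column(px, py, angle, max_dist=20.0, step=0.05):
    """Walk a single ray out from the player until it enters a wall
    cell, returning the distance travelled (naive stepping, for clarity)."""
    dist = 0.0
    while dist < max_dist:
        x = px + math.cos(angle) * dist
        y = py + math.sin(angle) * dist
        if GRID[int(y)][int(x)] == 1:
            return dist
        dist += step
    return max_dist

def render_heights(px, py, facing, fov=math.pi / 3, columns=320):
    """One ray per screen column; since all walls are the same height,
    the on-screen wall height simply scales as 1/distance."""
    heights = []
    for c in range(columns):
        angle = facing - fov / 2 + fov * c / (columns - 1)
        d = cast_column(px, py, angle)
        heights.append(1.0 / max(d, 1e-6))
    return heights
```

The payoff is exactly the one described above: 320 rays per frame instead of 64 000, because each column of pixels needs only one distance and one scaled height.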

With Carmack's graphics engine now blazingly fast, Romero, Adrian Carmack, and Hall set about creating a brutal game in which an American G.I. had to mow down Nazis while negotiating a series of maze-based levels. Upon its release in May 1992, Wolfenstein 3D was an instant sensation and became something of a benchmark for PCs. When Intel wanted to demonstrate the performance of its new Pentium chip to reporters, it showed them a system running Wolfenstein.

Wolfenstein also empowered gamers in unexpected ways--they could modify the game with their own levels and graphics. Instead of a Nazi officer, players could, for example, substitute Barney, the purple dinosaur star of U.S. children's television. Carmack and Romero made no attempt to sue the creators of these mutated versions of Wolfenstein, for, as hackers themselves, they couldn't have been more pleased.

Their next game, Doom, incorporated two important effects Carmack had experimented with in working on another game, Shadowcaster, for a company called Raven in 1992. One was to apply texture mapping to floors and ceilings, as well as to walls. Another was to add diminished lighting. Diminished lighting meant that, as in real life, distant vistas would recede into shadows, whereas in Wolfenstein, every room was brightly lit, with no variation in hue.

By this time, Carmack was programming for the Video Graphics Adapter (VGA) cards that had supplanted the EGA cards. VGA allowed 256 colors--a big step up from EGA's 16, but still a limited range that made it a challenge to incorporate all the shading needed for diminished lighting effects.

The solution was to restrict the palette used for the game's graphics, so that 16 shades of each of 16 colors could be accommodated. Carmack then programmed the computer to display different shades based on the player's location within a room. The darkest hues of a color were applied to far sections of a room; nearer surfaces would always be brighter than those farther away. This added to the moody atmosphere of the game.
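The palette trick boils down to a lookup: 16 base colors times 16 shades fills the 256 slots, and distance selects the shade. A minimal sketch, assuming a hypothetical cutoff distance and slot ordering (the real game used precomputed colormap tables):

```python
# Hypothetical diminished-lighting lookup: 16 colors x 16 shades = 256
# palette entries. Shade 0 is brightest (near), shade 15 darkest (far).

NUM_SHADES = 16
MAX_DIST = 32.0   # illustrative distance beyond which everything is darkest

def palette_index(base_color, distance):
    """Map a (base color, distance) pair to one of the 256 palette slots."""
    shade = int(min(distance, MAX_DIST) / MAX_DIST * (NUM_SHADES - 1))
    return base_color * NUM_SHADES + shade
```

Because the mapping is a pure table lookup per pixel, the mood-setting falloff costs almost nothing at render time.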

Both Carmack and Romero were eager to break away from the simple designs used in the levels of their earlier games. "My whole thing was--let's not do anything that Wolfenstein does," Romero says. "Let's not have the same light levels, let's not have the same ceiling heights, let's not have walls that are 90 degrees [to each other]. Let's show off Carmack's new technology by making everything look different."

Profiting from improvements in computer speed and memory, Carmack began working on how to draw polygons with more arbitrary shapes than Wolfenstein's trapezoids. "It was looking like [the graphics engine] wouldn't be fast enough," he recalls, "so we had to come up with a new approach....I knew that to be fast, we still had to have strictly horizontal floors and vertical walls." The answer was a technique known as binary space partitioning (BSP). Henry Fuchs, Zvi Kedem, and Bruce Naylor had popularized BSP techniques in 1980 while at Bell Labs to render 3-D models of objects on screen.

A fundamental problem in converting a 3-D model of an object into an on-screen image is determining which surfaces are actually visible, which boils down to calculating: is surface Y in front of, or behind, surface X? Traditionally, this calculation was done any time the model changed orientation.

The BSP approach depended on the observation that the model itself is static, and although different views give rise to different images, there is no change in the relationships between its surfaces. BSP allowed the relationships to be determined once and then stored in such a way that determining which surfaces hid other surfaces from any arbitrary viewpoint was a matter of looking up the information, not calculating it anew.

BSP takes the space occupied by the model and partitions it into two sections. If either section contains more than one surface of the model, it is divided again, until the space is completely broken up into sections each containing one surface. The branching hierarchy that results is called a BSP tree and extends all the way from the initial partition of the space down to the individual elements. By following a particular path through the nodes of the stored tree, it is possible to generate key information about the relationships between surfaces in a specific view of the model.

What if, Carmack wondered, you could use a BSP to create not just one 3-D model of an object, but an entire virtual world? Again, he made the problem simpler by imposing a constraint: walls had to be vertical and floors and ceilings horizontal. BSP could then be used to divide up not the 3-D space itself, but a much simpler 2-D plan view of that space and still provide all the important information about which surfaces were in front of which [see diagram, Divide and Conquer].
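A toy 2-D BSP over wall segments can illustrate the back-to-front ordering this buys. This sketch classifies each wall by its midpoint and skips the part where segments straddling the partition line must themselves be split, which a real engine has to handle:

```python
# Toy 2-D BSP: walls are 2-D segments ((x1, y1), (x2, y2)).
# Build once; then any viewpoint gets painter's order by a tree walk.

def side(splitter, point):
    """Sign of the cross product: which side of the splitter's
    infinite line the point lies on."""
    (x1, y1), (x2, y2) = splitter
    px, py = point
    cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
    return 1 if cross >= 0 else -1

def midpoint(seg):
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def build_bsp(segments):
    """Partition the remaining walls by the first wall's line, recursively."""
    if not segments:
        return None
    splitter, rest = segments[0], segments[1:]
    front = [s for s in rest if side(splitter, midpoint(s)) > 0]
    back = [s for s in rest if side(splitter, midpoint(s)) <= 0]
    return {"wall": splitter, "front": build_bsp(front), "back": build_bsp(back)}

def back_to_front(node, viewpoint, out):
    """Visit the subtree far from the viewer, then the splitter wall,
    then the near subtree: painter's order with no per-frame sorting."""
    if node is None:
        return out
    if side(node["wall"], viewpoint) > 0:
        back_to_front(node["back"], viewpoint, out)
        out.append(node["wall"])
        back_to_front(node["front"], viewpoint, out)
    else:
        back_to_front(node["front"], viewpoint, out)
        out.append(node["wall"])
        back_to_front(node["back"], viewpoint, out)
    return out
```

The key property is the one the article describes: the tree is built once for the static level, and each frame only walks it, looking the ordering up instead of recomputing it.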

Illustration: Armand Veneziano

Divide and Conquer: "Doom treated [the surfaces of the 3-D world] all as lines," Carmack says, "cutting lines and sorting lines is so much easier than sorting polygons....The whole point was taking BSP [trees] and applying them to...a plane, instead of to polygons in a 3-D world, which let it be drastically simpler."

Doom was also designed to make it easy for hackers to extend the game by adding their own graphics and game-level designs. Networking was added to Doom, allowing play between multiple players over a local-area network and modem-to-modem competition.

The game was released in December 1993. Between the multiplayer option, the extensibility, the riveting 3-D graphics, and the cleverly designed levels, which cast the player as a futuristic space marine fighting against the legions of hell, it became a phenomenon. Doom II, the sequel, featured more weapons and new levels but used the same graphics engine. It was released in October 1994 and eventually sold more than 1 500 000 copies at about $50 each; according to the NPD Group, it remains the third best-selling computer game in history.

The finish line

In the mid-1990s, Carmack felt that PC technology had advanced far enough for him to finally achieve two specific goals for his next game, Quake. He wanted to create an arbitrary 3-D world in which true 3-D objects could be viewed from any angle, unlike the flat sprites in Doom and Wolfenstein. The solution was to use the power of the latest generation of PCs to use BSP to chop up the volume of a true 3-D space, rather than just areas of a 2-D plan view. He also wanted to make a game that could be played over the Internet.

For Internet play, a client-server architecture was used. The server--which could be run on any PC--would handle the game environment consisting of rooms, the physics of moving objects, player positions, and so on. Meanwhile, the client PC would be responsible for both the input, through the player's keyboard and mouse, and the output, in the form of graphics and sound. Being online, however, the game was liable to lags and lapses in network packet deliveries--just the thing to screw up a fast action game. To reduce the problem, Id limited the packet delivery method to only the most necessary information, such as a player's position.

"The key point was use of an unreliable transport for all communication," Carmack says, "taking advantage of continuous packet communication and [relaxing] the normal requirements for reliable delivery," such as handshaking and error correction. A variety of data compression methods were also used to reduce the bandwidth. The multiplayer friendliness of the game that emerged--Quake--was rewarded by the emergence of a huge online community when it was released in June 1996.
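The "unreliable transport" idea can be sketched as follows: every packet carries a sequence number and the complete latest state, so a lost packet never needs retransmitting; the next packet simply supersedes it. The field layout and names below are made up for illustration and are not Quake's actual wire format:

```python
import struct

# Hypothetical wire format: sequence number plus a player position.
PACKET = struct.Struct("!Ifff")   # seq (uint32), x, y, z (floats)

def encode(seq, pos):
    """Pack one state update; the full latest state goes in every packet."""
    return PACKET.pack(seq, *pos)

class ClientView:
    """Client-side view of the server's state, fed by unreliable packets."""
    def __init__(self):
        self.latest_seq = -1
        self.pos = None

    def receive(self, payload):
        seq, x, y, z = PACKET.unpack(payload)
        if seq <= self.latest_seq:
            return False            # stale or duplicate packet: drop it
        self.latest_seq, self.pos = seq, (x, y, z)
        return True
```

Out-of-order and dropped packets cost nothing here: the client just keeps the newest state it has seen, which is exactly the relaxation of reliable-delivery requirements Carmack describes.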

Looking good

Games in general drove the evolution of video cards. But multiplayer games in particular created an insatiable demand for better graphics systems, providing a market for even the most incremental advance. Business users are not concerned if the graphics card they are using to view their e-mail updates the screen 8 times a second while their neighbor's card allows 10 updates a second. But a gamer playing Quake, in which the difference between killing or being killed is measured in tenths of a second, very much cares.

Quake soon became the de facto benchmark for the consumer graphics card industry. Says David Kirk, chief scientist of NVIDIA, a leading graphics processor manufacturer in Santa Clara, Calif., "Id Software's games always push the envelope."

Quake II improved on its predecessor by taking advantage of hardware acceleration that might be present in a PC, allowing much of the work of rendering 3-D scenes to be moved from the CPU to the video card. Quake III, released in December 1999, went a step further and became the first high-profile game to require hardware acceleration, much as Id had been willing to burn its boats in 1990 by insisting on EGA over CGA with Commander Keen.

Carmack himself feels that his real innovations peaked with Quake in 1996. Everything since, he says, is essentially refining a theme. Return to Castle Wolfenstein, in fact, was based on the Quake III engine, with much of the level and game logic development work being done by an outside company.

"There were critical points in the evolution of this stuff," Carmack says, "getting into first person at all, then getting into arbitrary 3-D, and then getting into hardware acceleration....But the critical goals have been met. There's still infinite refinement that we can do on all these different things, but...we can build an arbitrary representational world at some level of fidelity. We can be improving our fidelity and our special effects and all that. But we have the fundamental tools necessary to be doing games that are a simulation of the world."





State of the economy

The main paragraph from a newspaper article about the state of the Indian economy today.








Wednesday, September 20, 2017

NEET problems - just a repetition

Trouble is still brewing in the state of Tamil Nadu, this time due to NEET. This has now taken center stage over the longer-running political turmoil. The NEET issue reminds me of the long-standing ‘education reformation’ problem in the country. It started decades back, when I was still in school. Basically, it has to do with the truth that education is a fundamental right, but it is also a business. The government wants to teach everyone in the country for free, but it does not have the funds and power to do so.

The situation today is that only two groups of students can afford education – the extremely rich, or the extremely intelligent.

This leaves a significant number of students out, mainly because the best education is always pricey. Even gifted students need access to quality facilities and faculty to improve their scores, so that they too can move towards professional higher education, and a better way of life after that.

In the professional education sector, there have been two kinds of colleges for some time: the ones started and run by the government, and the private institutions. Most of today’s reputed institutions were originally started by the government decades back, and have now earned a reputation for dispensing affordable, high-quality education. The fees are heavily subsidized; students only need to worry about minor lodging fees. And students from the SC/ST reservation list received stipends. For this reason, such ‘government colleges’ have always attracted the most studious and talented students. There are so few seats available that only a very few of the millions of students passing out of high school are able to get into them. The rest have to settle for some kind of graduation, which may or may not fetch them a job. Or, if they can afford it, try to get into one of those thousands of ‘self-financing’ colleges. But nothing comes free at these colleges; the fees are extremely high.

The problem is compounded by the fact that although there is a central board of education in the country, most students study a course prescribed by the state they live in. This led to the establishment of a central entrance test, where students from different education boards have to write a common test and be graded solely on that result. The top government ‘free’ seats go to the ones scoring the highest percentile, and the rest just have to settle for one of the self-financed colleges.

As you can see, this whole system does not solve the problem at all. The problem is still present; it has just moved from high school marks to entrance marks. Only the smartest get in.

I was going through the initial phases of the government’s pathetic education reform during my college days. The government was largely deaf. And then, a student jumped to her death. This caused students to erupt in anger. Remember the Rajani suicide?


On July 22, 2004, Rajani jumped to death from the terrace of the building which houses the office of Commissioner of Entrance Examinations, Government of Kerala. In the weeks that followed her death, Kerala had been in flames. In cities and towns, enraged students have gone on the rampage, attacking ministers, setting ablaze scores of government vehicles and ransacking offices. Violent mobs also targeted the banks since the Indian Overseas Bank had allegedly refused Anand the educational loan. The incident had brought to the fore the deeper malaise in the state's education system, particularly the recent mushrooming of self-financing professional colleges, of which Rajani was a student. These colleges had rejected the Government's plea to allocate 50 per cent of the seats to merit category. The Opposition points out that the exorbitant fees charged at these institutions make professional education impossible for the economically backward sections of the society. Though the statewide protests are led by students' unions and youth wings affiliated to the Left Democratic Front (LDF), even pro-BJP organizations like the ABVP have been actively involved. The embarrassment for the Congress-led ruling United Democratic Front was complete when its Kerala Students' Union joined the agitation.

More than a decade later, I am still reading about student suicides. About high fees. About the debate between strict and liberal grades.

NEET won’t solve anything. It is another revision and repetition of the same problem.

PS: I was lucky enough to get one of the last remaining free government seats at a dilapidated aided college. I got by somehow. In later years I have realized that nothing I learned in college ever proved useful in my career.

Tuesday, September 12, 2017

Bollywood is stuck

It has occurred to me over the years that I now watch fewer and fewer Bollywood movies. The Bollywood flavor of love stories is now irritating and tediously repetitive. It’s nice to see others come to the same conclusion.

Watch how CBE breaks down the typical Bollywood movie, and why it’s always just a fantasy, never realistic.

Thursday, September 7, 2017

Got it. Finally.

Today is an important milestone for us. The date has always been special, but now it’s also the date we cleared a milestone. Slowly, very slowly, things we put in motion almost a year back are falling into place. It could have happened earlier; we could have achieved it sooner. But there will always be hindrances. And better late than never, right?

We are moving. We are leaving this company. This country. And moving for good. Abroad. Today we were granted our 189 independent visa for Australia.

After dreaming about and planning for this day for months, we have cleared one more hurdle. And something tells me there will be more hurdles down the path for us.

We’ll take it one at a time.

Puttakke putaakkee karimeen puttakkeyy, we are going there! All our dreams will come true. We will make them come true.

Friday, August 25, 2017

Hat trick for the Indian judiciary!

Moments after a CBI court convicted Dera Sacha Sauda chief Ram Rahim of rape, the Indian judiciary finds itself the unlikely hero of events of the past week.


In the span of a week, the judiciary has delivered three historic verdicts in three cases that had the nation glued to their television sets.


On August 22, the Supreme Court struck down instant triple talaq as practiced among Muslims. On August 24, the apex court once again emerged as the star, upholding the right to privacy as a fundamental right. To cap off the week, a CBI court in Panchkula convicted Ram Rahim in the rape case, despite 200,000 of his followers having laid siege to the city.




Tuesday, August 22, 2017

What is the verdict?







Monday, July 31, 2017

I hate hospitals

I hate hospitals.


I hate the smell of those disinfectants, the sight of white lab coats, and the sound of those ambulances. But mostly I hate the inefficient design of hospitals. Yes, in India, hospitals are practically designed to hurt you.


For the past week, I have been stuck at not one but two different hospitals. There was nothing wrong with me, by the way; I was caring for a patient. But the way I had to run from ward to pharmacy to scanning to reports and back to the ward reminded me why I hated hospitals in the first place.


These places are never designed to minimize movement; if anything, they maximize it. Facilities that logically should be next to each other are placed levels apart. There are only a limited number of lifts for the ailing patients. And I have never understood why the pharmacy doesn't have all the medicines in stock; one frequently has to fetch some of them from outside the hospital.


I think the main problem is the same as with every other public space in India: they are never designed for the actual load. The number of users, visitors and patients vastly outnumbers what the system can comfortably handle. Everywhere I went, I was waiting in a queue. Things are built to suit the management's convenience rather than the patients'.


Besides, they are also for-profit institutions.




Monday, July 17, 2017

Media wars

One can’t turn on the television nowadays. It’s full of crap. Nonsense. News, but with the heat turned up so high it hurts. I don’t own or watch a TV anymore, but there are TVs running in public areas. And every time I see one, I am reminded why I decided to dump the idiot box.


Turns out, there are some new players in the ever-growing media wars. New news channels. And one only needs to watch 5 seconds of their coverage to understand that they are no better than the drug peddler on the street. The screen is full of bold text, shouting the same thing over and over again. The same footage is replayed until it gets cemented in your head. They quote a lot of people, he said, she said, but offer very little fact. In fact, they have reported incorrect or outdated news many times in the past.




And now I see the same thing happening down south, in the coverage of a trending news topic. The case has not yet reached court, but the media have already announced their verdict.



If that is the state of fact, the fiction is even worse. TV serials and reality programming have flooded the channels, with multiple repeats throughout the day. I hear the focus has now shifted from saas-bahu serials to ghosts and black magic!


And while things are a little better on YouTube, with lots of new programming and original content, things are getting darker there too. The same media companies have taken to online video channels and are spreading there as well.



I long for the day when a calm newsreader simply read the day’s news with minimal expression. I think they still do that on Doordarshan; I should check. I long for that simple television programming of the 90s, when really talented artistes came together to tell a story. Sigh!


Tuesday, July 11, 2017

What’s our future?


What will our future be like? I don’t mean a few years from now, I mean the distant future. A million years from now. Or even a billion. What is in store for us? Turns out, no religion has an answer to this question. But science does. There is a whole Wikipedia article on this topic here.

From our starting point in Africa, we've managed to colonize the entire world and have even reached as far away as the Moon. The Bering land bridge that once connected Asia and North America has long since been submerged beneath the ocean. So if humanity exists for another billion years, what additional changes or events can we reasonably expect?

Starting at about ten thousand years into the future, we will encounter the year 10,000 problem: software that encodes the AD calendar year as a four-digit decimal will no longer be able to encode dates starting at 10,000 AD. It will be a real Y10K. In addition, if current trends of globalization continue, human genetic variation will no longer be regionalized by this point, meaning that human genetic traits like skin color and hair color will be evenly distributed across the world. 20,000 years from now, future languages will retain only one out of every 100 core vocabulary words of their present-day counterparts; essentially no modern language will be recognizable by this point. In 50,000 years, Earth will enter another glacial period regardless of current global warming effects. Niagara Falls will have completely eroded into Lake Erie and ceased to exist, and due to glacial rebound and erosion, the many lakes of the Canadian Shield will also cease to exist. Also, one full day on Earth will have lengthened by one full second, requiring a leap second to be added every day. 100,000 years from now, the stars and constellations visible from Earth will be completely different from what they are today. This is also the estimated amount of time it would take to fully terraform Mars into a habitable planet similar to Earth.

In 250,000 years, the Lōʻihi volcano will rise above the surface and form a new island in the Hawaiian island chain. In 500,000 years, Earth will likely have been struck by an asteroid one kilometer in diameter, unless humanity artificially prevents it. Additionally, Badlands National Park in South Dakota will have completely eroded away by this point. In 950,000 years, Meteor Crater in Arizona, considered the best-preserved meteorite impact crater on Earth, will have completely eroded away. In 1,000,000 years, Earth will likely have experienced a supervolcanic eruption large enough to erupt 3,200 cubic kilometers of ash, an event similar to the Toba supereruption 70,000 years ago that almost made humanity extinct. In addition, the star Betelgeuse will have exploded into a supernova by this point, and the explosion will be easily visible from Earth even during the daytime.

In 2,000,000 years, the Grand Canyon will have eroded significantly further, deepening slightly but mostly widening out into a large valley. If humanity has colonized different planets across the solar system and the universe by this point, and the populations on each planet have remained separate from one another, then humanity will likely have evolved into various different species. These different human species will be adapted to their different planets and may not even be aware of the other human species in the rest of the universe. In 10,000,000 years, a large part of East Africa will split off from the rest of the continent; a new ocean basin will form between the two sides, and Africa will be divided into two separate landmasses. In 50,000,000 years, Phobos, a moon of Mars, will collide with the planet, causing massive destruction there. Back on Earth, the remaining part of Africa will collide with Eurasia and close off the Mediterranean Sea forever. A new mountain range similar in size to the Himalayas will form between the now-connected landmasses and may possibly produce a mountain higher than Mount Everest. In 60,000,000 years, the Canadian Rockies will have completely eroded into a flat plain. In 80,000,000 years, all of the Hawaiian Islands will have sunk back beneath the ocean. And in 100,000,000 years, Earth will likely have been struck by an asteroid similar to the one that wiped out the dinosaurs 66,000,000 years ago, assuming of course that it isn't artificially prevented. In addition, by this point the rings around Saturn will no longer exist.

In 240,000,000 years, Earth will finally have completed one full orbit around the galactic center from its current position. In 250,000,000 years, all of the continents on Earth will have fused together into a supercontinent similar to Pangaea; a possible name for this continent is Pangaea Ultima. Then, in 400,000,000 to 500,000,000 years, the supercontinent will break apart once again. In 500,000,000 to 600,000,000 years, a deadly gamma-ray burst will likely occur within 6,500 light years of Earth. If conditions are right, or wrong, if you prefer, the burst could strike Earth and severely damage the ozone layer, causing a mass extinction event. In 600,000,000 years, the Moon will have moved far enough away from Earth that total solar eclipses will no longer be possible. In addition, the Sun's increasing luminosity will cause severe effects on Earth: plate tectonic movement will stop, and carbon dioxide levels in the atmosphere will decrease dramatically. C3 photosynthesis will no longer be possible, and 99% of current plant life on Earth will die. In 800,000,000 years, CO2 levels will have continued to fall to the point where C4 photosynthesis is no longer possible either; free oxygen and ozone will disappear from the atmosphere, and all complex life on Earth will die. Finally, in 1,000,000,000 years, the Sun's luminosity will have increased 10% from its current state. The surface temperature on Earth will rise to a sweltering 47 degrees Celsius on average, the atmosphere will turn into a moist greenhouse, and the world's oceans will evaporate away. Pockets of liquid water may still exist at the Earth's poles, however, which means they will probably become the last bastion of life on our planet.
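The year 10,000 problem mentioned above is easy to demonstrate. Here is a minimal Python sketch; the fixed-width "YYYYMMDD" encoding is just an illustrative assumption, not any particular system's format, and it happens that Python's own `date` type shares the same four-digit ceiling:

```python
from datetime import date

def encode(d: date) -> str:
    """Pack a date into a fixed-width YYYYMMDD string (assumes a 4-digit year)."""
    return f"{d.year:04d}{d.month:02d}{d.day:02d}"

print(encode(date(2017, 7, 11)))  # a year that still fits in 4 digits

# Python's datetime module caps years at MAXYEAR (9999), so even
# constructing a date in 10,000 AD fails outright - a tiny Y10K preview.
try:
    date(10000, 1, 1)
except ValueError as e:
    print("Y10K:", e)
```

Any format that assumes exactly four year digits, whether in file names, log lines, or database keys, hits the same wall.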

People of tomorrow are going to look back and laugh at us, at all those wars and fights we had, and all the stupid decisions leaders took for us. Let’s hope we can leave the world a little better than the one we inherited, so that our descendants a few thousand years from now will thank us.

Tuesday, July 4, 2017

Remember Winamp ?


Today I came across a slightly older article documenting the demise of Winamp. Remember Winamp? During the last decade, it was the de facto MP3 player on every Windows machine! With thousands of skins, visualizations and other add-ons, it made every other MP3 player look lame. There were skinning tools that could create custom skins from photos. There were even tools to automate and control Winamp over Bluetooth. And then Web 2.0 happened. And portable MP3 players (CD/USB). And in the end, affordable smartphones. People no longer turn on their desktops to listen to music. They just get it online and stream it via modern HTML5 browsers. Can’t believe it’s been more than 15 years since Winamp came out.


It was fun going back in time, and sad reading about how mismanagement took Winamp down.


Puttaakee puttakkee, karimeen puttaakkee


I miss the 90s again. Those were fun times.

Thursday, June 29, 2017

Cow Analytics



This media group has compiled some statistics about the rising cow-related violence in the country.


See for yourselves.




Monday, June 26, 2017

Snapdeal’s failure

Interesting read. Saw this article condensing Snapdeal’s fall into failure. I’ve had bad experiences with them. Turns out they never had any focus.

Wednesday, June 21, 2017

Wow, that’s disappointing

The 40-Year-Old Mystery of the "Wow!" Signal Was Just Solved. Background: In 1977, the sound of extraterrestrials was heard by human ears for the first time, or so people at the time thought. The Wow! Signal was detected by astronomer Jerry Ehman using Ohio State University’s Big Ear radio telescope, a radio signal detector that, at the time, was pointed at a group of stars called Chi Sagittarii in the constellation Sagittarius.

While scanning the skies around those stars, Ehman captured a 72-second burst of radio waves. He circled the reading and wrote “Wow!” next to it, hence the signal’s name. Over the last 40 years, the signal has been cited as evidence that we are not alone in the galaxy. Experts and laypeople alike believed that, finally, we had evidence of alien life.

For a very long time, this was the strongest candidate we had as proof of extraterrestrial intelligence; it could not be explained any other way. However, Professor Antonio Paris, of St Petersburg College, has now discovered the explanation: a pair of comets. The work was published in the Journal of the Washington Academy of Sciences.

These comets, known as 266P/Christensen and 335P/Gibbs, are surrounded by clouds of hydrogen gas millions of kilometers in diameter. The Wow! Signal was detected at 1420 MHz, which is the radio frequency hydrogen naturally emits. Notably, the team verified that the comets were in the vicinity at the time, and they report that the radio signals from 266P/Christensen matched those of the Wow! signal.

Friday, June 9, 2017

India’s 4G speeds a third of global average

How fast is 4G again?

India ranks 74th in a list of 75 countries ranked by average 4G speed.

Despite the flood of deep discounts and attractive data packages telecom operators have been offering in recent months to retain their subscriber base, 4G internet speed in India, a crucial parameter of user experience, continues to be dismal, a new survey has found.

At an average data speed of 5.14 Mbps, India’s 4G speed is about a third of the global average and just a notch above the average global 3G speed. Ranked 74th among the 75 countries surveyed, India’s 4G was found to be much slower than that of other countries, including Pakistan and Sri Lanka, and faster only than Costa Rica, which ranks at the bottom.

According to the Open Signal report, Pakistan recorded average data speeds of 11.71 Mbps. The countries on top of 4G internet speeds include Singapore and South Korea, with download speeds of about 40 Mbps.

In Costa Rica and India, the drop in average data speeds was attributed to the abrupt increase in number of 4G users in the country.

The report also ranks countries in order of 4G network availability in the world and India fared better in this particular list, making it to the 15th position, globally. Between September 2016 and March 2017, there has been an 82 percent surge in 4G internet availability, largely on the back of Reliance Jio's entry into the telecom sector last year.

India has some of the slowest LTE speeds in the world, the report said. In fact, the report goes on to underline a pattern of drop in 4G network speeds in the country, recording a fall of over one per cent over the past six months.

These findings come in stark contrast to the figures released by the Telecom Regulatory Authority of India (Trai). The telecom regulator had earlier said that Reliance Jio topped the chart in 4G network speed for the month of April with an all-time high download speed of 19.12 megabits per second.

[Chart: average 4G speed (Mbps) by country across the 75 surveyed, from Singapore and South Korea at the top down to India and Costa Rica at the bottom.]