LATEST NEWS

The Future of Telehealth

By: Keith Hearne
“Imagine a world where no matter who you are or where you are, you can get the health care you need when you need it.”
- Quote from the opening page of the Office for the Advancement of Telehealth website

An appealing idea: the notion that if you are in a remote place and need medical assistance or access to medical knowledge, you can get it when you need it, no matter where you are. Fortunately, telehealth is already more than a notion; indeed, in some places it is alive, well and in use.
So what is telehealth? Telehealth is the use of electronic information and telecommunications technologies to support long-distance clinical health care, patient and professional health-related education, public health and health administration. With the latest move in mobile communications to third generation (3G) technology, where Internet and mobile technology are converging, telehealth may be able to offer that little bit more.
First generation (1G) mobile communications systems started in the early to mid 1980s, offering simple wireless voice services based on analog technology. These systems provided low quality voice services, were very limited in capacity and did not extend across geographic areas. Our current mobile phone services work on digital second generation (2G) systems, which were developed in Europe (mainly Global System for Mobile Communication (GSM)) and the U.S. to provide better voice quality, higher capacity, global roaming capability as well as lower power consumption. 2G systems also offer support for services like short messaging (SMS).
However, the low transmission (bit) rate of 2G systems (9.6 kbps for GSM) cannot meet demands for new and faster non-voice services on the move. 3G systems aim to solve the problems encountered with 2G by promising global roaming across 3G standards, as well as support for multimedia applications, using Universal Mobile Telecommunications System (UMTS) technology. UMTS will also offer increased bandwidth and better Quality of Service (QoS). With UMTS technology, telehealth will take on a whole new life and offer much more efficient medical services.
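To put these bit rates in perspective, here is a rough back-of-the-envelope sketch in Python of how long an illustrative 500 KB medical image would take to transmit over a 9.6 kbps GSM data channel versus a nominal 384 kbps UMTS connection; the file size is an assumption chosen for illustration, and protocol overhead is ignored.

```python
# Rough illustration of why bit rate matters for telehealth: time to send
# a hypothetical 500 KB medical image over a 9.6 kbps GSM data channel
# versus a nominal 384 kbps UMTS connection (ideal rates, no overhead).

FILE_SIZE_BYTES = 500 * 1024           # hypothetical image size (assumption)
FILE_SIZE_BITS = FILE_SIZE_BYTES * 8

def transfer_minutes(bit_rate_bps: int) -> float:
    """Ideal transfer time in minutes, ignoring protocol overhead."""
    return FILE_SIZE_BITS / bit_rate_bps / 60

for label, rate in [("2G GSM  (9.6 kbps)", 9_600),
                    ("3G UMTS (384 kbps)", 384_000)]:
    print(f"{label}: {transfer_minutes(rate):.1f} minutes")

# Approximate output:
#   2G GSM  (9.6 kbps): 7.1 minutes
#   3G UMTS (384 kbps): 0.2 minutes
```

Even this crude comparison shows why the jump in bandwidth matters for image-heavy medical applications.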
Telecommunications and Internet services currently facilitate telehealth by allowing patient and other medical data to be transferred between health care professionals. Consultation and treatment, however, remain the major areas of health care that telehealth has yet to reach. In the coming years we will see our current mobile communications systems extended with UMTS technologies, and as a result consultation and treatment should become available virtually anywhere a mobile communications system can be accessed. This kind of technology should bring dramatic improvements to areas like emergency and home treatment, as well as routine check-ups, and lead to considerable cost savings for the different stakeholders in the healthcare system. For hospitals it will mean reduced costs and shorter bed stays; GPs will be able to extend the reach of their consultation services; and, most importantly for patients, telehealth will mean independence and fewer doctors' visits.
Let's consider a few scenarios where UMTS technology and telehealth will be able to offer improved medical assistance.
Patient Monitoring: We are already seeing new integrated technologies being introduced to the mobile marketplace, with the latest mobile phones offering a built-in still camera that lets you take pictures and send them to friends. With improved integration we will see mobile devices that offer live video streaming via built-in cameras. With this in mind, patient monitoring could easily be implemented by a GP: with monitoring equipment set up at the patient's home or on the move, the GP can keep an eye on the patient's progress without having to be there.
Emergency Care: Imagine someone is in a car accident; she is physically uninjured, but a fellow passenger has suffered trauma and injury. They are miles from the nearest hospital or doctor, but she has a mobile device with phone and live video capabilities. She can ring a doctor, who can provide invaluable on-the-spot advice for her to follow, possibly saving the victim's life. The same approach could extend to doctors advising nursing or ambulance staff while a patient is en route to hospital.
Consultations and Call-Outs: It's 3:00am and you're woken by your three-year-old daughter, who is crying, feverish, coughing heavily and complaining of an upset stomach, and you don't know what to do. It's a fairly common situation in which you have to call a doctor out to your house late at night or early in the morning. But what if the advice you need could be delivered over your video phone? The doctor could see the patient through a live stream, determine whether the situation warrants a house call and, if not, give detailed treatment instructions over the phone. Once the consultation is over you could even pay for it using your mobile UMTS device.
All the technologies needed for mobile health care, or telemedicine, are already in place. With the expansion of UMTS and 3G technologies in the coming years, we will see these types of services extended and greatly improved. UMTS will also bring the ability to pay for services through your mobile, and this ability to handle payments will be a catalyst for bringing healthcare services to the mobile world.
These are just a few of the possibilities of telehealth that we may see come into operation with the introduction of UMTS technology. One thing is clear, though: telehealth will more than likely touch all of our lives in the coming years, making our medical systems more efficient and more patient-friendly, and providing a vehicle to improve healthcare service delivery.
See also:

http://telehealth.hrsa.gov/

http://telehealth.net/

http://www.health.nsw.gov.au/pmd/telehealth/about/about.htm

George Boole – An unsung hero

By: Rob O’Connor

Unless you have been living on Mars for the past 15 years, you will know that Ireland has shown phenomenal economic growth in the recent past. This has been due to, among other things, government subsidies in higher education, increased foreign investment and the cultivation of domestic business. At the forefront of Ireland's economic revolution was its strong position in the Information Technology market, so much so that we were known as "the Silicon Valley of Europe." However, not many people are aware that Ireland's presence on the technology map is not a new development; in fact, every computer circuit in the world is based on mathematical theories developed by a mathematics professor in Cork in the mid-19th century.

George Boole was born in the English industrial town of Lincoln in 1815. His parents were members of the lower working class, and in an age so driven by class structure and social standing, it is quite remarkable that Boole eventually achieved what he did. As a young boy he showed unusual intelligence and curiosity, which was nurtured by his father, who passed on his knowledge and love of mathematics to his son. As he grew older, his father arranged for a friend to teach the boy Latin, and by the time he reached adolescence Boole was also fluent in German, Italian and French. When he turned 16 he began working as an assistant teacher, and at the age of 20 (after deciding not to enter Church service) he opened his own school.

As well as running his school, Boole continued his private study of mathematics. He became friendly with Duncan Gregory, the editor of the recently founded Cambridge Mathematical Journal, who encouraged Boole to undertake a formal degree. However, Boole needed the income from his school to support his parents, so a penniless student life was not viable. He carried on his research and, at the ripe old age of 24, published his first scientific paper – Researches on the Theory of Analytical Transformations – in his friend's periodical. Over the next ten years he produced a steady stream of papers that gained the attention of intellectual groups, and in 1844 he was awarded the Royal Society Medal for his contributions to mathematical analysis. In 1849 he was offered a position on the faculty of Queen's College, Cork (later to become University College Cork), where he remained for the rest of his life.
As a member of a respected academic institution, Boole found it easier to delve into research, and he concentrated on refining his methods. He set himself the task of developing a mathematical description for natural-language logical arguments, which could then be manipulated and solved. He came up with a type of linguistic algebra whose three most basic operations were (and still are) AND, OR and NOT. With these three functions he discovered it was possible to perform comparisons and simple mathematical operations. His findings were published in his 1854 work An Investigation of the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities, in which he gave a detailed description of his binary approach of processing only two objects at any one time, always yielding a Yes/No, On/Off or 1/0 result. As most people know, computers are binary machines, and since everything inside them is reduced to 1s and 0s, their operations ultimately rest on Boole's work.
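As a minimal illustration of the algebra Boole described, the short Python sketch below enumerates the truth table for AND, OR and NOT over the two values 1 and 0; it is purely illustrative and does not reproduce Boole's own notation.

```python
# Boole's three basic operations over the two truth values 1 (true) and 0 (false).
from itertools import product

print(" a  b | a AND b | a OR b | NOT a")
for a, b in product([1, 0], repeat=2):
    print(f" {a}  {b} |    {a & b}    |   {a | b}    |   {1 - a}")
```

Replace 1 and 0 with high and low voltages and these are exactly the comparisons a computer circuit performs.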
Boole's laws are incredibly simple – but that's what's so good about them! They may seem obvious now, but 150 years ago this type of thinking was unheard of. Boole's work is well documented elsewhere, and any interested reader can find very good explanations on the Internet (e.g. www.kerryr.net) or in mathematical logic textbooks.
Sadly, Boole's career came to an untimely end when he died at the early age of 49. According to reports, he walked the two miles from his home to Queen's College in pouring rain and delivered his final lecture soaking wet. He developed a feverish cold and soon his lungs became infected. His wife Mary (niece of the explorer Sir George Everest) believed that the remedy for her husband's illness should resemble its cause: she put him to bed and threw buckets of water over him. He died a few days later.

Mind your own business – or someone else will

By: Jonathan Brazil
Computers are everywhere. It’s a modern fact that cannot be argued with. Convenience stores, supermarkets and even pubs all have computer-controlled cash registers. They have been introduced to make our everyday lives more efficient. Or have they?
Most regular shoppers are loyal to a particular store where they do their weekly shopping and pick up other bits and pieces. As a result, they will at some stage have been offered a loyalty scheme such as a club card: every time they make a purchase they present the card and are rewarded in some way for it.
However, it doesn't stop there. All the details of your purchases are now stored in a profile somewhere on the store's server. Remember when you signed up for the card and gave your details to the store? Well, you did, and now that you have started to use the card, a database is recording your purchases and building up a profile of your shopping habits.
Every once in a while you will receive a personalised booklet of vouchers offering you discounts on various items. Isn't it strange how these apparently random discounts often have a lot in common with your regular purchases, combined with a few impulse items? I sometimes buy items such as DVDs, books and CDs online. One of the requirements now is that you supply an e-mail address so that you can be notified that your order has been received and kept up to date on its status. Now, every time there is a new release, special offer or similar event, it is emailed to me. Also, every time you log into your account on that site you are given a list of titles that might interest you, based on past purchases. Every time you buy something you are being tracked, and you are saving these companies thousands in marketing surveys. Of course, you are being rewarded for this in the way of discounts and the like.
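As a minimal sketch of the kind of profile-building described above, the Python fragment below aggregates a handful of card-linked purchases into a simple profile; every field name and data value here is hypothetical, and real retail systems are of course far more elaborate.

```python
# Toy illustration: aggregating card-linked purchases into a shopping profile.
# All identifiers, items and prices are hypothetical.
from collections import Counter

purchases = [
    {"card_id": "C123", "item": "coffee", "price": 4.50},
    {"card_id": "C123", "item": "coffee", "price": 4.50},
    {"card_id": "C123", "item": "dvd",    "price": 19.99},
]

profile = {
    "card_id": "C123",
    "favourite_items": Counter(p["item"] for p in purchases).most_common(3),
    "total_spend": round(sum(p["price"] for p in purchases), 2),
}

print(profile)
# Personalised vouchers could then be generated from 'favourite_items'.
```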
If you're not happy with the fact that so many companies might be building up a highly detailed profile of your shopping habits or general interests, you can comfort yourself with the thought that somewhere in the small print there is probably a legal notice stating that your information will not be shared with any outside company. But what happens if the company's policy on this matter changes some time down the road? Suddenly a whole selection of companies knows what you have been buying for the last number of years, how often you have been buying it and, if you shop in multiple stores, even how much you are prepared to pay for it.
We as consumers have signed away a part of our privacy in exchange for small discounts on our purchases. The question we have to ask ourselves is: Are we fuelling a market, which is already one of the highest priced in Europe, with information that reflects our inclination to pay more for certain items? Personally I’d rather keep what I eat for breakfast a secret.

Mobile Communications Advancement

By: Keith Hearne
Today mobile communication is virtually everywhere. Even if you do not possess a mobile phone yourself, chances are that nine of the next ten people you meet today will. The mobile communications market is currently undergoing a great deal of change, and this is set to continue. Over the next few years people will continue to see new and improved services and devices, as well as improved delivery, being introduced. So where have we come from, and where have we come to?
A great deal of the advancement and creation of the mobile, or wireless, sector of the global market has to be attributed to big conglomerates like Motorola. From the time man first landed on the moon right up to the current move to the next generation (2.5G and 3G) of mobile services and devices, Motorola has been there every step of the way. When Neil Armstrong's first words were relayed from the moon to the earth, they travelled through a Motorola radio transponder providing two-way voice and television signal transmission. Right through the 1980s Motorola worked hard at providing cellular technology, and in the late '80s it introduced personal cellular telephones into the marketplace. In fact, the 1980s was the boom decade for wireless communication: pagers grew in popularity, the first cellular car-phone appeared, followed closely by the world's first portable mobile phone, and great improvements were made in both the weight and size of these devices in the years that followed.
One of the most ambitious and significant advancements in the mobile communications arena at the time was the Iridium system (www.iridium.com), a concept for global personal communications. Iridium was designed around a constellation of Low Earth Orbit (LEO) satellites (77 were originally planned, giving the system its name from element 77; 66 were eventually deployed), operated by Boeing, together with twelve ground station gateways linking the satellite constellation to terrestrial wireless and landline public telephone networks and providing one hundred percent global coverage. Research and development work on the Iridium system started in 1987, pioneered by Motorola engineers Ray Leopold, Ken Peterson and Bary Bertiger. Iridium service was launched on November 1, 1998 by Iridium LLC (founded in 1991, with an investment of about $7 billion), but the company later went into liquidation. With a $3,000 price tag on an Iridium phone, plus international calling rates of up to $7 a minute, it brought in only around 15,000 customers before going bust. Iridium Satellite LLC acquired Iridium through the bankruptcy courts and resumed service on March 28, 2001; it currently provides services to the United States Department of Defense.
In the last five years we have all seen the progression of mobile phones as they practically took over the planet. A few years ago a mobile phone was a luxury; now it is almost, and indeed for some people already is, a necessity. New gadgets are being introduced to the market on a daily basis, and it is swamped with wireless and mobile devices like the new GPRS (General Packet Radio Service) phones, which give customers fast, "always-on" access to the Internet while charging them only for what they actually download rather than the time spent browsing. We also have devices like O2's new XDA, which combines a mobile phone with a Pocket PC.
The apparent sacrifice we have made for this technology is the presence of base stations and ugly masts on the horizon, which provide signal coverage to the more remote parts of our countries. So what next? Will the new wave of devices that offer us the world mean we will be overrun with bigger and uglier masts to provide such services? It may not be all that bad. While there have been advances in the technologies provided, a lot of research, development and resources are also going into how these technologies are deployed. We may see a decline in the use of masts as the years go by.
In the week of July 24th this year, a US company, SkyTower, said it had successfully performed a series of tests in Hawaii of its new technology, a communications airplane called Pathfinder-Plus. The airplane is unmanned and solar-powered and, with advancements in battery technology, could stay airborne for six months at a time while operating above the weather and air traffic at 65,000 feet. It is envisaged that by 2005 people might be receiving mobile phone services, broadband connections and even digital TV from these or similar solar-powered airplanes. The advantages would be greater coverage and more efficient delivery, thanks to the aircraft's high vantage point and its consumption of less than 1/10,000th of the power used by a typical terrestrial broadcast transmitter, which has to overcome buildings, trees and other obstructions to cover the same area. Aesthetically, such a delivery system would also be a great improvement on today's terrestrial tower build-outs and backhaul.
So the truth of it is that we have come a long way in the last few decades in terms of mobile communications. However, with big hitters like NTT DoCoMo, NEC, Toshiba and many more pumping colossal amounts of time, money and resources into projects such as the Pathfinder-Plus, it's clear that we have only seen the tip of the iceberg so far.

Open Source Software

By: Shane Dempsey
By now, most Internet users have heard something about 'open source' software or the 'open source' initiative. For many, the idea that some programmers are willing to give away their programs for free to anyone (usually via the Internet) seems a bit strange at first. Non-technical users often wonder what these words mean and why members of the 'open source community' revere them almost religiously. Yet many people already use open source software. The Linux operating system (www.linux.org) is perhaps the most famous and vocal open source project. Web users rely on open source software whether they know it or not: they download web pages from web servers, and the most popular web server is developed by the Apache open source project (www.apache.org). Similarly, a large proportion of email on the Internet is handled by sendmail (www.sendmail.org), an open source mailing system. In many ways, then, the Internet itself is built on foundation blocks that are open source software projects.
Most software is developed in high-level languages (e.g. C++, C, Delphi) and then converted into a binary version for distribution and sale. In order to make changes to the software, programmers need access to the original source code in the high-level language. In traditional software development, the company that developed the software jealously guards this source code, and then releases updates fixing bugs (errors) or adding new features, often for an additional upgrade fee. Open source software is very different: any programmer can download the source code and change it. Companies that work in this area make money by selling a service based around the software, rather than the software itself. However, the concept is a bit more complex than just freely available source code. The authoritative definition of open source (the Open Source Definition maintained by the Open Source Initiative) covers these points:
* Free distribution – The source code or software itself may be included in another project and freely distributed without restriction or royalty payments to the authors.
* Source Code – The complete source code must be openly, freely and publicly available.
* Derived works – The open source license must allow derived works. These may be distributed under the same terms as the original software.
* Integrity of the author’s source code – The author may restrict distribution of modified code to publicly available ‘patches’ or may require a different name or version number for the modified code.
* No discrimination against persons or groups – "The license may not discriminate against any person or group of persons."
* No restriction of field of endeavour – The code may be used by any project regardless of the field of endeavour.
* Distribution of license – The rights attached to the software must apply to everyone to whom it is redistributed, without those parties needing to execute an additional license agreement.
* License must not be specific to a product – The license should not depend on the software being part of another software program. For example, an open source part of a commercial project may be extracted and used under the terms of its original open source license.
* The license must not restrict other software – The license should place no restrictions on other software distributed alongside the open source software. For example, it must not insist that such software also be open source.
These rules may seem complicated and quite abstract, but it is worth remembering that a software license is legally binding. As such, the license needs to provide extensive coverage of legal issues regarding usage of the licensed software. Not all open source licenses adhere strictly to the definition outlined above. A brief, perhaps flippant, look at some commercial, closed-source licenses is instructive, and the open source community's wary attitude towards Microsoft's End User License Agreements (EULAs) seems justified. Software in general is sold with an 'as is' warranty, meaning in effect that there is no warranty and no obligation to provide support. Anyone who has installed a Microsoft product has effectively agreed to a contract with Microsoft that contains lines like the following:
8. DISCLAIMER OF WARRANTIES. THE SOFTWARE IS DEEMED ACCEPTED BY RECIPIENT. THE SOFTWARE CONTAINS PRE-RELEASE SOFTWARE AND MAY BE CHANGED SUBSTANTIALLY BEFORE COMMERCIAL RELEASE. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, MICROSOFT AND ITS SUPPLIERS PROVIDE THE SOFTWARE AND ANY (IF ANY) SUPPORT SERVICES RELATED TO THE SOFTWARE (“SUPPORT SERVICES”) AS IS AND WITH ALL FAULTS, AND HEREBY DISCLAIM WITH RESPECT TO THE SOFTWARE AND SUPPORT SERVICES ALL WARRANTIES AND CONDITIONS, WHETHER EXPRESS, IMPLIED OR STATUTORY, INCLUDING, BUT NOT LIMITED TO, ANY (IF ANY) WARRANTIES, DUTIES OR CONDITIONS OF OR RELATED TO: MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, LACK OF VIRUSES, ACCURACY OR COMPLETENESS OF RESPONSES, RESULTS, WORKMANLIKE EFFORT AND LACK OF NEGLIGENCE. ALSO, THERE IS NO WARRANTY, DUTY OR CONDITION OF TITLE, QUIET ENJOYMENT, QUIET POSSESSION, AND CORRESPONDENCE TO DESCRIPTION OR NON-INFRINGEMENT. THE ENTIRE RISK AS TO THE QUALITY, OR ARISING OUT OF THE USE OR PERFORMANCE OF THE SOFTWARE AND ANY SUPPORT SERVICES, REMAINS WITH RECIPIENT.
Extract from an actual MS EULA
This is clearly far from ideal from a consumer's point of view. While software development is complex, the money the software vendor receives provides the user with no guarantees as to the quality of the product. This is one of the reasons that open source software is becoming popular despite common flaws such as:
* Lack of documentation or poor documentation;
* High level of technical proficiency required for successful usage;
* Very little technical support.
For the experienced user these problems are lessened, and the cost saving makes the hassle seem worthwhile.
So you may already be using open source software indirectly when you access the Internet, and over time this model may become more important in desktop applications and business applications.
