LATEST NEWS

Computer Viruses: Why Do They Infect?

By: Jonathan Brazil
There are many definitions of a virus, but put simply, a virus is just a set of malicious instructions telling a computer to do something nasty. This behaviour can be trivial, such as the unwanted deletion of personal files. Yes, I did say trivial: compare it with the catastrophe of your personal files being sent to recipients via email and your PC being rendered useless to stop you discovering the misdemeanour. Personal files should be backed up to a separate medium such as floppy, Zip or CD anyway; it does not take a virus to destroy your personal data!
There are several reasons for the rapid spread of viruses in recent times. Popular operating systems have captured a large market share partly because of sophisticated features that allow system tasks to be automated via a script of instructions. This gives the system a nicer feel and makes it appear more powerful to the experienced user, but it is also the Achilles heel of the system: if the operating system knows how to process these instructions, there is nothing stopping somebody from writing a script of instructions to perform some less graceful activity, as the sketch below illustrates.
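Here is a deliberately harmless sketch in Python (the article names no particular scripting language): a few lines that walk a user's files and merely report them. Swapping the report line for a delete or a mail call is, in essence, what a script virus does with the same facilities.

    import os

    # A benign automation script: walk the home directory and merely report
    # document files (a dry run). The point is how short this is; replacing
    # the print() with a delete or a mail call is, in essence, what a script
    # virus does with the same operating-system facilities.
    def walk_documents(root):
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if name.endswith((".doc", ".txt")):
                    print(os.path.join(dirpath, name))  # report only

    walk_documents(os.path.expanduser("~"))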
Viruses spread in a number of ways; e-mail viruses spread by sending a copy of themselves to every e-mail address in your contacts directory, thus prolonging the lifecycle of the virus by potentially infecting more users. However, the biggest cause of virus dispersion is lack of vigilance on the user's behalf. People open files that are e-mailed from people they don't know; these files could do anything. You might receive an e-mail virus from someone you know, but despite the impersonal facade of e-mail, people still add personal touches. If the e-mail has no greeting, no meaningful content or an attachment name that makes no sense, then don't open it. If it follows the aforementioned pattern and you weren't expecting it, then it probably wasn't for you and you don't want to see what it can do. Vigilance is the only sure way to protect your system; don't be tempted to click into the unknown.
On an interesting note, you should be aware that many emails warning of viruses are in fact hoaxes. They waste time, energy and computing resources by spreading rumours. One of the first of these hoaxes was the notorious “Good Times” hoax, started in December 1994 and still going around in various forms, which warns you not to read any email with “Good Times” as the subject. As a good rule of thumb, check an authoritative website to see if you are dealing with a hoax before mailing virus warnings to all your friends and colleagues.
Viruses have plagued mankind since the dawn of time and it seems only fitting that, as we complete our transition into the computer era, viruses should remain the scourge of the general public. The common cold is no longer the only infectious thing we have to worry about.
There are many virus protection programs. Every computer should really have one installed, with its list of known viruses regularly updated (this can be done automatically over the Internet for most programs).

Time- and distance-based billing models are dead!

By: Conor Ryan
To survive in the rapidly expanding services market, service providers have to overhaul their traditional billing and accounting models. The flat-rate model (monthly subscription, unlimited usage for a fixed price) is a ‘going-out-of-business’ model, and it is widely accepted amongst consumers and providers that future billing for present and emerging services will be content and usage based.
This poses the question as to why consumers are not already being charged for content to a greater extent. Up to now, the widespread and continued usage of the flat-rate model has been attributed to the fact that most existing service providers have been accustomed to distance- and time-based billing. It is the comfort zone: it has always worked in the past, it will always work in the future, and, above all, it is simple. However, many new services, and some existing services, are Internet Protocol (IP) based. In an IP world, geography is irrelevant, so distance-based billing just does not work. Time-based billing presents even more catastrophic failures, as it is perceived to discourage the user from using the network. Also, until recently, service providers were locked into the ubiquitous flat-rate business model by their fear that nobody would pay for content when they could get it, or something similar, elsewhere for free (much as in the freeware software scenario). The service providers could also, justifiably, explain their resistance to wringing value from content (through usage, value, service, application or transaction-based charging) by pointing to the absence of sophisticated billing systems adapted to the IP environment, i.e. systems able to extract detailed network usage information, exchange usage/accounting information, and so on.
The proliferation of consumers’ Quality of Service expectations has also highlighted severe inadequacies in the service providers’ flat-rate business models. Consider a simple comparison of the two models: in the traditional model, a consumer requests the download of a 300-megabyte movie, but the actual download takes 332 megabytes because of retransmission issues. The provider can only charge for 300 megabytes and hence loses out on 32 megabytes, while also failing to provide a quality service (perhaps because of a fault with the network provider). In a content-based model the consumer is charged for the movie, not the megabytes: the movie download costs EUR 19.99 irrespective of the time or size of the download, or of where it is downloaded from.
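A minimal sketch of the two charging rules, in Python: the EUR 19.99 price and the 300/332-megabyte figures come from the example above, while the per-megabyte rate is invented purely for illustration.

    # Compare the two charging rules for the movie example above.
    # MB_RATE is a hypothetical tariff; EUR 19.99 is the price from the text.
    MB_RATE = 0.05        # EUR per megabyte (illustrative only)
    MOVIE_PRICE = 19.99   # EUR, flat price for the content

    def usage_based_charge(requested_mb, delivered_mb):
        # The provider can only bill the megabytes the consumer asked for,
        # so retransmitted megabytes are carried at a loss.
        billable = min(requested_mb, delivered_mb)
        unbilled = max(0, delivered_mb - requested_mb)
        return billable * MB_RATE, unbilled

    def content_based_charge(requested_mb, delivered_mb):
        # The consumer buys the movie, not the megabytes.
        return MOVIE_PRICE, 0

    for model in (usage_based_charge, content_based_charge):
        charge, lost = model(300, 332)
        print(f"{model.__name__}: EUR {charge:.2f}, {lost} MB delivered unbilled")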
The Telecommunications Software Systems Group (TSSG) has spent the last five years researching billing and charging software systems to support such a billing model. The TSSG has recently gained recognition for this research through receipt of a substantial grant from Enterprise Ireland to fund the development of a commercial rating product labelled the Rating Bureau System (RBS).

Search Engines

By: Gary McManus
The best thing about the Internet is that there are oodles of pages out there referencing all sorts of information. Unfortunately the worst thing about the Internet is that there are oodles of pages out there referencing all sorts of information. So how do I retrieve the information I need? Answer: Search Engines.
In the following series of articles I will endeavour to explain the concept of search engines and their use, both for retrieval and submission of information.
What is a Search Engine?
A search engine is a coordinated set of programs that allows users to enter a search request and returns a list of pages that reference this request. These programs include the following (a minimal sketch of all three appears after the list):
* A spider program that goes to every page on every Web site that wants to be searchable and reads it
* A program that creates an index (catalogue) from all the pages that have been read by the spider program
* A program that receives your search request, compares it to the entries in its search index, and returns resulting references to you
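The promised sketch, in Python. The ‘web’ here is an in-memory dictionary of invented example URLs; a real spider would fetch pages over HTTP, which is elided.

    # A toy version of the three programs, operating on an in-memory "web".
    pages = {
        "http://example.com/a": "search engines index pages",
        "http://example.com/b": "spiders read every page on every site",
    }

    # 1. Spider: visit each page and hand its text to the indexer.
    # 2. Indexer: build an inverted index mapping each word to a set of URLs.
    index = {}
    for url, text in pages.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(url)

    # 3. Query program: look each word of the request up in the index and
    #    return the pages that reference all of them.
    def search(request):
        results = [index.get(word, set()) for word in request.lower().split()]
        return set.intersection(*results) if results else set()

    print(search("every page"))  # {'http://example.com/b'}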
There are two primary methods of searching, keyword and concept. The most common method is keyword, with concept offering more of a challenge to search engine companies.
Keyword-based systems match the text of your query against an index of keywords. The web page developer can specify words for indexing, or the search engine indexes the pages itself using a predefined method (e.g. the first 20 lines). Most search engines these days will index every word on every page, whereas others will index only certain parts of a page. These indexes can be built up using a subset of the data on the page (e.g. title, headings and subheadings, links or the first ‘x’ words in the document).
Concept-based systems try to determine what you mean as opposed to what you say. These systems return references to documents that are ‘about’ the search request as opposed to exactly what you specified. Using this method, words are examined in relation to other words found nearby on the page. These methods use sophisticated linguistic and artificial intelligence theories to perform the search (too complicated to start explaining here). When certain words and phrases occur close together in a document, the system concludes by statistical analysis that the document is ‘about’ a certain topic; a crude illustration of this proximity idea follows.
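The following toy Python function, with an invented scoring rule, scores a document by how close two query terms occur to each other; real concept-based engines use far more sophisticated statistics and linguistics.

    # Toy proximity score: a document is more "about" a query when its
    # terms occur close together. Real systems are far more elaborate.
    def proximity_score(document, term_a, term_b, window=5):
        words = document.lower().split()
        positions_a = [i for i, w in enumerate(words) if w == term_a]
        positions_b = [i for i, w in enumerate(words) if w == term_b]
        # Count pairs of occurrences falling within the window.
        return sum(1 for a in positions_a for b in positions_b
                   if abs(a - b) <= window)

    doc = "the engine indexes each page so the engine can rank each page"
    print(proximity_score(doc, "engine", "page"))  # prints 3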
In the next article I will explain how to maximise the use of search engines, and retrieve pages most appropriate to your search area.
The TSSG recommends using Google (http://www.google.com) for keyword searches (it also has a downloadable taskbar for Microsoft’s Internet Explorer which allows you to search at any time without having to go to the Google home page first, see http://taskbar.google.com/). For browsing through broad categories of information the TSSG would currently recommend using Yahoo! (either http://www.yahoo.com for the World, or http://www.yahoo.co.uk for the UK and Ireland). Interestingly, Yahoo! actually uses Google for their own keyword searches!

Convergence of Telecommunications and Internetworks

By: Mary Nolan
Converge is the title of a project being run by the TSSG to investigate the above issue. Since their initial development over thirty years ago, computer communications systems, particularly the Internet, have been designed independently of Public Switched Telecommunications Networks (PSTN). These systems have sometimes interacted, particularly in the use of the PSTN to carry Internet traffic, but only recently have attempts begun to really converge the services provided onto a single integrated network architecture.
These convergence attempts have been motivated by the fact that one of the key requirements in deploying a pervasive and ubiquitous information superhighway is the development of a global integrated telecommunications infrastructure. This global communication network will have to deal with a wide spectrum of traffic characteristics, because the network will have to support, simultaneously, applications that have a wide range of expectations and requirements.
The reasons for the fundamentally different designs of the PSTN and computer networks are several. The PSTN was developed to carry telephone calls, whereas the Internet and other computer networks were intended to transfer asynchronous data between computing devices. Telephony has strict quality of service (QoS) constraints – the user will not tolerate significant time delay, variation in this delay (jitter), loss of communication or unavailability of service.
By contrast, computer communications services have not had such stringent time constraints; delays of the order of seconds have been considered acceptable. Access to a computer network has generally been via sophisticated devices that are capable of recovering from data loss in the network (i.e. requesting resends of packets, etc.), so a “best-effort” communications service was considered sufficient.
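That recovery can be sketched as a toy stop-and-wait retransmission loop in Python; this is a simplification for illustration only, as real transport protocols such as TCP add sequence numbers, acknowledgements, timers and windows.

    import random

    # Toy stop-and-wait sender over a lossy "network": keep resending each
    # packet until it gets through. This only shows the recovery idea.
    def lossy_send(packet, loss_rate=0.3):
        return random.random() > loss_rate  # True means the packet arrived

    def reliable_send(packets):
        for packet in packets:
            attempts = 1
            while not lossy_send(packet):
                attempts += 1  # the best-effort network dropped it; resend
            print(f"{packet} delivered after {attempts} attempt(s)")

    reliable_send(["pkt-1", "pkt-2", "pkt-3"])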
The different evolutionary paths and different levels of quality of service guarantee have resulted in fundamentally different approaches to charging for the use of these two networks. PSTN users primarily pay for usage on the basis of time and distance, since a fixed portion of a limited resource is guaranteed to the user for the duration of a call (to provide quality of service). By contrast it is unreasonable to ask users to pay for usage of best-effort Internet services where no guarantees of quality of service can be made. Thus, payment for computer communications services has generally been on a monthly flat-rate basis (any time-based charges are just for access over the public telecoms network).
Three major aspects of this convergence are being investigated:
* Quality of Service – mechanisms for delivery of telecoms services over the Internet with sufficient quality guarantees.
* Accounting – pricing of future quality Internet services (while maintaining revenue streams for telecoms companies); systems are required to account for usage and charge correctly.
* Security – granting users access to services (i.e. authorisation); authentication of users (so the correct user is billed); and enhancement of the privacy of user communications and the integrity of accounting data gathered.

Concerns about Electronic-Commerce

By: Mary Nolan
Electronic Commerce, otherwise referred to as E-Commerce, is defined by Kalakota & Whinston (1997) as:
“The delivery of information/products/ services/payments over phone lines, computer nets & other electronic means”
Products have been available for purchase by telephone for a number of years: a consumer provides credit card details, which are verified by a teller, and the consumer can then place an order. Product types range from saucepans to concert tickets. This method of purchase has proved popular, as it is still in use today.
However, with the evolution of the Internet, more products and services have become available on-line. To the consumer, this should mean a more convenient way of purchasing goods and services, with no more queuing for tellers. However, significant issues such as security are now emerging. Companies want tools that allow them to know with some certainty that customers are who they claim to be, so that billing can be charged against the proper accounts. Similarly, consumers are concerned about the security and privacy of on-line transactions. They want to know that only authorised eyes will see personal account information or credit card numbers. Several dozen companies are working on extensions to Internet protocols to provide these types of security features.
When we make a purchase by telephone we are reasonably assured that our information is safe. Making an on-line purchase is slightly different. A purchase order containing the consumer’s details is filled in, analogous to a consumer filling out an order form and sending it in a paper envelope via postal mail. With the postal order, the consumer is reasonably certain that the order and credit-card number will arrive safely, as any tampering with the envelope would be blatantly obvious. Internet mail, however, finds a route to its destination without knowing the identity of each node it passes through, and evidence of tampering is far less obvious; this is tempting to unscrupulous individuals.
To mitigate these risks, many on-line businesses use various security protocols. One in particular is SET, the Secure Electronic Transactions protocol. This is an open encryption (used to scramble a message and make it unreadable to intermediate nodes on its route) and security specification designed to protect credit card transactions on the Internet. SET is not itself a payment system; rather, it is a set of security protocols and formats that enables users to employ the existing credit card payment infrastructure on an open network, such as the Internet, in a secure fashion. With the use of digital certificates, used to identify each party involved in the transaction, both consumer and company can be sure they are dealing with the correct person.
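SET itself is far too involved to show here, but the underlying encryption idea (scrambling the order so that intermediate nodes cannot read it) can be illustrated in a few lines of Python. This sketch is not the SET protocol: it uses the third-party cryptography package and a locally generated key, where SET would instead involve digital certificates and the card payment infrastructure.

    # Illustration of the encryption idea only; this is NOT the SET protocol.
    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()    # a shared secret; SET would use certificates
    cipher = Fernet(key)

    order = b"card=4111111111111111&item=concert-ticket"  # invented order data
    token = cipher.encrypt(order)  # scrambled, unreadable to intermediate nodes
    print(token)
    print(cipher.decrypt(token))   # only the key holder recovers the order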
With the development and improvement of these protocols, purchasing on-line will become as much a part of our daily lives as getting out of bed in the morning.
