Usable or Useless – which will your next product be?
Usability, or human factors engineering, should already be well known to all medical product manufacturers. This is not only due to MDR regulations in the EU and the FDA in the US, but also to a basic market requirement that has remained unchanged since trade was first invented: satisfy your clients. Products that fail to satisfy end customers have always existed, yet it is becoming increasingly hard to find such a product in a shop. Why? Useless products do not survive market verification. Customers make their choices based on a set of simple needs that must be fulfilled, and only products that fulfil those needs become commercially successful. Products that fail to fulfil them vanish from shop shelves, because it quickly becomes obvious that no one wishes to buy them.
The basic set of needs every customer has is as follows: products must be safe, effective, efficient, satisfying and fun to use. No one is interested in buying products that are merely legally safe; modern markets place an ocean of opportunities and choices at customers' fingertips.
To achieve market success with your next product you will need to do more than simply design a safe product. This is where usability comes in, an attribute just as important as safety. In fact, usability will not only improve your product's safety, through safer use and fewer use errors, but will also make product use more efficient, making your customers more effective in their jobs and everyday tasks; they will be much more satisfied and will have more fun performing their routine tasks when using your product. Customers quickly develop preferences based on their perception of the products they buy.
Isn’t this too much to ask of a simple medical product interface?
Of course not! Usability is not simply an interface through which a user interacts with the product. Usability is how the user manual is written, how operating instructions are presented and made easy to grasp, how intuitive and satisfying the product is to use, and conversely to what degree product usage remains unclear and frustrating.
Let me give you a brief example. In my professional career I have installed many medical devices in hospitals and trained medical personnel in how to use them. One of these machines required a nurse to enter technical parameters of treatment via an old-fashioned interface. This had to be performed via a set of nine buttons; different letters were selected by multiple pressings of the buttons, like sending a text on an early mobile phone. Then the cursor could be moved to the next position. After several installations and training sessions with this device, I gathered customers’ feedback by asking a simple question: are you satisfied with my visit and your new equipment? The feedback I got was almost always the same, and a pattern of customer frustration started to emerge: ‘Your visit was great, but this product interface is terrible! How can I not make an error using such an interface, under time pressure, pressing those tiny buttons hundreds of times a day?’
From then on, when conducting an installation of a new product in a hospital and training medical personnel, I always spent additional time pre-programming several of the most common sets of treatment parameters into the product's memory, so that a nurse could choose one instantly from an interface menu instead of going through that awful finger gymnastics again and again.
Feedback on the product improved immediately, but I have no doubt that this situation could have been avoided entirely if the manufacturer had spent just a little more time designing a usable interface and produced a more usable product.
Medical technology absolutely must be intuitive and easy to use. If it is not, the product suddenly stops healing and starts killing. A minor typo, error or omission could have catastrophic results in everyday healthcare use. Sophisticated and complex technology is taking over the world, and nowhere is this more obvious than in the healthcare industry. In this field, more than any other, it is vital that products are usable, intuitive and easy to operate – simple in use. Errors of use, when they happen in medicine, can have instant and tragic consequences. Only usable products will facilitate the good practice by medical personnel that is essential for positive outcomes.
Usability is a multidisciplinary field that crosses over into engineering and science, taking into consideration factors like hardware design, software, psychology, ergonomics, anthropometrics, vision, memory and many more variables unique to human beings. Usability can be succinctly defined as the set of product attributes that shape the user's perception of the product.
Today, regulations clearly define the basic requirement of conducting usability testing and obtaining objective evidence of the product's appropriate usability. This applies not only to sophisticated diagnosis or therapy equipment but to all medical products, no matter how trivial the potential damage to the patient or operator, no matter how low the risk class of the device. All products must undergo usability testing, which must later be described in a usability report that forms part of the product's technical file. Why? It should be clear by now. Safe and usable products prevent user errors, operate predictably and as expected, and help users detect potential dangers and correct errors before tragedy happens.
Therefore, usability testing is vital, and regulators are expressing their expectations of usability studies with increasing regularity. Usability testing gives manufacturers an opportunity for design improvements prior to product release, before errors and accidents occur in the field. The application of usability engineering at the product development stage brings clear benefits. It not only satisfies medical regulators such as the FDA in the US and Notified Bodies in the EU; it will also boost the manufacturer's commercial success through improved user satisfaction and feedback, and will help build the manufacturer's reputation as a reliable, safe company that considers its customers' needs in advance.
Usability testing begins with the effective deployment of usability processes within a company. The very first steps (defining users, environments, hazards, potential use errors, special factors, etc.) are undertaken to define the future shape and depth of usability testing. Once the process and the scope of testing have been defined and well documented and the prototype is ready, usability testing can commence. The manufacturer chooses a representative group of users, who interact with the product within a representative environment. These users are closely monitored by usability engineers, and all potential use errors, omissions, user habits and actions are noted in detail. These close observations of user interaction with the product must be well documented, because they serve as an important input to potential improvements of product usability and to risk analysis, and may support decisions regarding the implementation of risk control measures. Every usability test brings new and important information about potential use errors, many of which will not have been anticipated in the product design phase.
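As a loose illustration of how such observations might be captured for later risk analysis, consider the sketch below. The record structure and field names are entirely my own invention – no standard prescribes a particular format – but the principle is the one described above: every observed use error becomes a documented input to the risk file.

```python
from dataclasses import dataclass, field

@dataclass
class UseObservation:
    """One observed user interaction during a usability test session.

    Field names are illustrative only; the usability standard does not
    prescribe any particular record format.
    """
    participant: str   # anonymized user ID
    task: str          # task the user was asked to perform
    outcome: str       # e.g. "success", "use_error", "close_call"
    notes: str = ""    # free-text observation, feeds the risk analysis

@dataclass
class UsabilityTestLog:
    observations: list = field(default_factory=list)

    def record(self, obs: UseObservation) -> None:
        self.observations.append(obs)

    def use_errors(self) -> list:
        """Observed use errors are carried forward into risk analysis."""
        return [o for o in self.observations if o.outcome == "use_error"]

log = UsabilityTestLog()
log.record(UseObservation("P01", "enter treatment parameters", "use_error",
                          "selected wrong field under time pressure"))
log.record(UseObservation("P02", "enter treatment parameters", "success"))
print(len(log.use_errors()))  # one use error to feed into the risk file
```

Even such a minimal structure makes it easy to trace each identified use error from the test session into the risk analysis and, eventually, to a risk control measure.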
The history of usability engineering in market access regulations stretches back to the early 1990s, when both the FDA and ANSI/AAMI published the first design guidance. Over the last 30 years it has become a well-defined, logical and comprehensive approach, clearly recognized by regulatory requirements globally. Usability engineering has become an integral part of the product design and development process and plays an important role in limiting unacceptable use errors in the field. Its importance should come as no surprise to anyone, particularly in the modern world of interconnected devices, big data and artificial intelligence. And this is definitely not the end of usability engineering's history. This is only the beginning!
Monster or Manual – what is IEC60601-1?
The history of medical safety standards over the last three decades is proof that evolution exists, at least within medical safety science. In the beginning, requirements and safe limits were different, defined in different standards, published on different continents, and had to go through semi-harmonized processes. But over time they have developed into a well-defined, holistic and globally harmonized approach to medical product safety and effectiveness, as defined in the current edition of the 60601-1 series of standards. It’s been a long journey, but one of great importance and advantage to medical product manufacturers: we have finally reached a time when safety requirements for medical products are harmonized globally, which significantly streamlines processes and efforts towards safe and effective medical products entering global markets.
However, the revolution in the approach to safety brought about by the 3rd edition of the IEC60601-1 standards was not a quick and easy lesson. This standard was published in 2005 and supplanted the previous version, published in 1988. Imagine a mobile phone from 2005, or a laptop, or a car. Would you call it new, modern or innovative today? Revolution can move slowly, but step-by-step progress is inevitable and can easily be seen if you look back over past decades.
Similar to the MDD (Medical Device Directive) and MDR (Medical Device Regulation), the medical general safety standard grew tremendously in size between the 2nd and 3rd editions. Unsurprisingly, this sudden size-change earned the new edition the nickname 'monster'. I have spent several years explaining to people that, rather than being perceived as a monster, it should be considered a complete and full 'manual' detailing how to design and develop safe medical products. It is an essential manual for every medical product manufacturer, without which development of medical products would be virtually impossible. Why? It's simple. Legally, according to the MDD and MDR, in order to be allowed to enter the European market, a medical product must prove its safety and effectiveness through objective evidence. If the manufacturer is not able to prove a product's safety and effectiveness, the product cannot enter the market and cannot legally be called a medical product. The most common objective evidence of basic safety and essential performance is the test data and product evaluation results, as described in detail in the safety standard.
I do agree that the change between editions is huge, but not just in size. The most significant change is in the general philosophy of, and approach to, safe product development, leading to the creation of products that do what they were designed for, consistently and systematically, throughout their lifetime. Historically, medical standards were hardware- and type-testing focused, and their requirements were incident-driven: many requirements related to hardware safety were included in the body of the standard because too many similar accidents had already occurred in the field. In the past, medical products could be developed in uncontrolled environments, according to unknown processes. The main change brought by the new standards is that this is no longer possible. The 3rd edition introduces a completely new concept, according to which a medical product shall be developed according to well-defined processes, and those processes must be adopted by the med-tech industry as a benchmark in product realisation activities. The three major processes are the Risk Management Process, the Usability Engineering Process and the Software Lifecycle Process. So, to obtain market access, it is now mandatory not only to create and present a safe and effective medical product; it is also necessary to prove that the product was developed according to the correct methodology, that the processes are effectively deployed and implemented, and that they are accessible to everyone involved in product realisation activities. This was a massive change in the approach to safety engineering, and it brought a lot of anxiety, confusion and ambiguity when it was first published and introduced to the market. Manufacturers took the longest possible time to adapt to the new requirements, just as they are now lagging behind in their preparations for the adoption of the MDR.
I would like to calm down the situation and address the potential anxiety and ambiguity. The new edition of the medical safety standards is not a monster! It is a good, comprehensive manual that can help you in your efforts to develop safe and effective products. It is logical and well organized, and it addresses all potential hazards one by one. It is not only a source of important information and definitions, detailing ‘what’ and ‘how’, but it also explains the reasons behind the addition of new requirements.
The general concept of the standard can be defined in one sentence: medical products must remain safe and effective in all intended use and foreseeable misuse conditions, in normal and single fault conditions, throughout their entire lifetime. This is the main objective. Of course, there is much more detail given on the >200 pages of essential requirements and subsequent >200 pages of supplementary guidance, but the general goal is simple, clear and logical: medical products must not create unacceptable hazards while in use. The standard describes in detail many hardware and software solutions that could be utilized, often in combination with each other, in order to achieve this goal.
A good example of such a combination of solutions is the requirement to provide a fireproof enclosure, which, in the case of fire or meltdown of components due to high temperatures, will contain the fire within the equipment and prevent it from spreading. This can be achieved by providing a non-flammable enclosure whose vents and baffles are constructed so that fire cannot escape. However, if a manufacturer simulates all potential single faults that may lead to excessive temperature rise within the equipment, and by that testing can prove that fire, smoke and melting materials will not occur, the non-flammable enclosure is not required.
Another interesting point, one that invites wide interpretation, is clause 4.5 – alternative risk control measures – which allows manufacturers to define their own methods of demonstrating the safety of the product. Of course, such alternative means of compliance need to be supported by objective evidence and comprehensive engineering judgement, often scientific data, resulting in an equivalent or better outcome than application of the standard's methods, limits and test methodologies.
Many say that the biggest change in the new standards is risk management. It was not referenced in medical safety standards before 2005, and suddenly it has become mandatory; but the concept of risk management is also simple, clear and logical. The reason risk management entered safety evaluations of medical products is that we are creating ever more sophisticated and innovative technologies and incorporating them into medical products. Put simply, it would be impossible to define upfront the safety tests, limits and acceptance criteria for all the new engineering concepts being implemented in modern medical products. Therefore, risk analysis was introduced in addition to the old concept of testing hardware safety alone. Risk management can be applied to all aspects of product development and is a process that helps designers make the right decisions for safety. It is a process that, when vigilantly followed, will help you improve your products' safety and effectiveness significantly. It should not be feared: risk management is the manufacturer's great ally in helping to obtain the best possible outcomes.
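To make the idea concrete, here is a minimal sketch of the severity-times-probability evaluation at the heart of a typical risk analysis. The scales and the acceptability threshold below are hypothetical illustrations of my own; the risk management standard deliberately leaves the actual scales and acceptability criteria to the manufacturer's risk management plan.

```python
# Hypothetical 1-5 scales; the real scales and acceptability criteria
# are defined by each manufacturer, not by the standard.
SEVERITY = {"negligible": 1, "minor": 2, "serious": 3,
            "critical": 4, "catastrophic": 5}
PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3,
               "probable": 4, "frequent": 5}

def risk_level(severity: str, probability: str) -> int:
    """Combine severity and probability into a single risk score."""
    return SEVERITY[severity] * PROBABILITY[probability]

def acceptable(severity: str, probability: str, threshold: int = 8) -> bool:
    """Risks above the (example) threshold require risk control measures."""
    return risk_level(severity, probability) <= threshold

# A serious but remote risk may be acceptable as estimated...
print(acceptable("serious", "remote"))       # True  (3 * 2 = 6)
# ...while a critical, occasional one demands a control measure.
print(acceptable("critical", "occasional"))  # False (4 * 3 = 12)
```

The value of the exercise is not the arithmetic but the discipline: every identified hazard gets an explicit, documented decision, and every unacceptable risk gets a control measure whose effectiveness is verified.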
The med-tech market is the most innovative product development area of them all. Fantastic breakthrough ideas, engineering concepts and new technologies are appearing on design desks every day. The new products completely change how we do our jobs and how we live our lives. This change is inevitable and always will be: like evolution in the biological world, med-tech requires constant change and adjustment in order to best adapt to the constantly evolving world around us.
And we had all better get used to it, because there is one constant that will always be part of our lives: change.
Radiation Protection around Linear Electron Accelerators
How major radiotherapy incidents led to safety standards development and enforcement
Before we discuss the complex engineering behind linear accelerators' safety, let's have a brief catch-up on the major types of radiation used in medicine, each associated with different particles. Listed from the heaviest to the massless: alpha, beta and gamma. Alpha radiation is transported by heavy particles consisting of two protons and two neutrons. It carries high energy and ionizes everything in its path, hence its biological effectiveness is very high; however, it also loses that energy almost immediately through interactions with other particles. Therefore, alpha radiation does not have the ability to penetrate the body, and almost anything is enough to shield against it, even a sheet of paper. Beta radiation is transported by much lighter single high-energy electrons. Its ability to penetrate the body is much higher, depends on energy, and is measured in centimetres. The third type is gamma radiation, transported by massless gamma photons. Gamma radiation has the highest ability to penetrate matter and the body, so it is widely used in medical imaging and therapeutic equipment. Lead shielding is commonly used to protect against gamma radiation. Finally, there is also a fourth type: neutron radiation. Neutrons have a very high biological effectiveness, up to 20 times higher than gamma, but because they carry no electric charge we cannot harness them as we harness and accelerate electron beams. Neutron penetration is very high, depending on energy. The best shielding against neutrons is water, in particular the hydrogen atoms in water, which have a high probability of interacting with free neutrons. Medicine uses all these sorts of radiation, but linear accelerators mainly use just two of them: electron and gamma.
Linear medical electron accelerators, more commonly known as linacs, use a radio-frequency accelerating structure to speed up electrons generated in an electron gun. The electron beam is steered by a bending magnet and hits a tungsten target, where the electrons' energy is transformed into gamma radiation photons, which then escape the accelerator head via a beam-shaping multileaf collimator. The electrons' energy directly determines the energy of the gamma photons. The typical energy of gamma photons used in radiotherapy is 6–25 MeV.
The system is installed in three adjacent locations: the control room, the therapy room and an inaccessible area at the back of gantry, where the UPS, water chiller and power supplies reside. The therapy room is contained within a lead (gamma shield) and concrete (neutron shield) bunker, which shields all the surrounding area from the ionizing radiation being produced by the linac inside.
The first linacs were deployed in healthcare in the 1960s, much earlier than the first CT scanners (1971) and MRI systems (1983). So the technology should be well developed by now – what could possibly go wrong? Well, the entire history of radiotherapy equipment is actually a story of continuous improvement of technological and system safety. Those improvements in technology and healthcare systems came at the unforgettable cost of many tragic incidents, paid for with human lives. But let me start this story from the beginning…
Soon after entering the market, the technology was involved in medical incidents that came unexpectedly and that no one could have anticipated. In 1966 three patients received overdoses during electron therapy on a dual-mode radiotherapy accelerator. Dual mode means that the machine can generate two alternative beams: electron or gamma. The committee in charge of investigating the incidents informed accelerator users that medical accelerators are capable of delivering 70 times more radiation to the patient than the intended dose; hence, the risks associated with hazardous output had to be adequately considered and kept within safe limits and under control. Two technical problems were identified: irregularity of the beam current, and a failure of the single internal ion chamber used to monitor the dose. Two guidance documents for manufacturers were produced, imposing the principle of dual and independent electrometers and leading to the present design of redundant monitoring ion chamber systems, which terminate irradiation instantly should the measured dose rate or accumulated dose values differ.
In the 1980s the first linac that was designed to use computer-control in place of hardware interlocks was introduced to the market. An internal computer and software operation were integral to all processes related to output control and execution. The manufacturer conducted a hazard analysis before clinical introduction, but regrettably this did not include the new software-related risks.
In 1985 a patient undergoing radiation treatment was overdosed, suffering serious illness and health consequences. The manufacturer was informed about the incident but did not accept that an equipment malfunction could have caused or contributed to it. The same year another patient was overdosed, this time fatally. The manufacturer conducted an investigation, which resulted in additional instructions to operators to reconfirm treatment parameters and machine state before initiating a beam by 'looking at the display'. No product modification was undertaken to prevent such incidents in the future. Soon after, another patient was overdosed. Hospital staff had not been made aware of similar accidents happening in other hospitals. Again, the manufacturer did not accept that equipment malfunction may have contributed to the incident.
The following year, in 1986, another patient overdose happened, and an investigation commenced. This time more details were revealed. It was noted that the audio and video link between the control room and the treatment room was not working, and also that the system had 'shouted' error messages before the accident. The system was checked and placed back into operation within a few days. Almost immediately, on the same machine, another overdose accident happened. Both patients died. The hospital physicist was able to systematically reproduce the malfunction: it occurred whenever a specific sequence of keystrokes was repeated on the control panel within eight seconds. The overdose conditions were reproduced every time – the accelerator produced a beam type that did not match the one selected. This was finally a breakthrough: the error was located and could be fixed. The manufacturer recommended a minor modification to be carried out on the product in the field to avoid the situation reoccurring. Unfortunately, it was not long before the next tragic incident: another patient received a fatal overdose. Soon after this incident it was admitted that a fault in the software permitted electron beam therapy while the accelerator was set up for gamma beam therapy. This incorrect setup of the machine occurred regularly when parameters in the control panel were changed or modified while the accelerator was still moving its components into the desired positions. It was obvious that specific design improvements had to be developed and implemented, and that standard requirements for linear accelerators finally had to be written.
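The failure pattern described above can be sketched in a few lines: if editing a parameter does not invalidate the machine's 'ready' state while the slow hardware setup is still in progress, a beam can be fired against a stale configuration. The sketch below is my own simplified illustration of that state-consistency principle, not actual accelerator software.

```python
class BeamController:
    """Simplified model of the selection-vs-setup consistency problem.

    Illustrative only: the point is that any parameter edit must
    invalidate 'ready' until the hardware has caught up, and an
    interlock must refuse to fire on an inconsistent state.
    """
    def __init__(self):
        self.selected_beam = None   # what the operator asked for
        self.hardware_beam = None   # what the machine is physically set up for
        self.ready = False

    def set_beam(self, beam_type: str) -> None:
        # Editing a parameter invalidates readiness until setup completes.
        self.selected_beam = beam_type
        self.ready = False

    def hardware_setup_complete(self) -> None:
        self.hardware_beam = self.selected_beam
        self.ready = True

    def fire(self) -> str:
        # Interlock: refuse to fire unless selection and setup agree.
        if not self.ready or self.selected_beam != self.hardware_beam:
            return "INTERLOCK: setup incomplete or inconsistent"
        return f"irradiating: {self.hardware_beam}"

c = BeamController()
c.set_beam("gamma")
c.hardware_setup_complete()
c.set_beam("electron")  # operator edits parameters at the last moment
print(c.fire())         # interlock blocks firing against the stale setup
```

The original fault was precisely the absence of this invalidation: the software allowed the beam to fire while the selection and the physical setup disagreed.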
The first requirement was to incorporate all software testing together with hardware, and carry out testing at a system level. Communication between manufacturers and departments was greatly improved. Also, the communication between hospitals and regulatory authorities began to improve in the area of reporting and inspections. Furthermore, communication improved within the radiotherapy physics community. A more coordinated and planned approach to quality control began to develop, and the importance of peripheral equipment, like CCTV and audio, was recognized.
Today, safety standards for linear accelerators used in radiotherapy are available and mandatory for manufacturers. A number of requirements were written into them to address the tragic incidents of the past and prevent their reoccurrence. It is now absolutely required that a radiotherapy system be equipped with two independent dose monitoring systems, each capable of terminating the treatment should they indicate a difference above 5%. Also, two independent and separate radiation detectors must be provided. Each system independently must be capable of terminating irradiation once the intended dose has been delivered, so that the accumulated dose does not exceed the set dose by more than 10%. Furthermore, it became mandatory to monitor the symmetry of the dose distribution, allowing only a 10% difference in the absorbed dose distribution symmetry. An interlock has been introduced to ensure that only the selected radiation type can be emitted, and a specific sequence of equipment states was introduced to clearly organize the operation of the system: starting with stand-by, through the preparatory state, the ready state (which can be reached only without any parameter modification), and finally the irradiation state. Before the next irradiation can happen, all parameters must be cleared from the control panel and all displays must be reset to zero.
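The dual-monitor requirement can be sketched as follows. The 5% agreement and dose-termination figures come from the text above; the function itself is only an illustrative model of the logic, not a real control system.

```python
def check_dose_monitors(rate_a: float, rate_b: float,
                        accumulated: float, set_dose: float) -> bool:
    """Return True if irradiation may continue.

    Illustrative sketch only: two independent monitor readings must
    agree within 5%, and irradiation must terminate once the set dose
    has been delivered (so the accumulated dose cannot overshoot the
    set dose by more than the allowed margin).
    """
    # Terminate if the two independent readings differ by more than 5%.
    if abs(rate_a - rate_b) > 0.05 * max(rate_a, rate_b):
        return False
    # Terminate once the intended dose has been delivered.
    if accumulated >= set_dose:
        return False
    return True

print(check_dose_monitors(100.0, 101.0, accumulated=50.0,  set_dose=200.0))  # True
print(check_dose_monitors(100.0, 110.0, accumulated=50.0,  set_dose=200.0))  # False: monitors disagree
print(check_dose_monitors(100.0, 101.0, accumulated=200.0, set_dose=200.0))  # False: dose delivered
```

In the real requirement, crucially, each of the two systems performs this check independently and each can terminate the beam on its own; redundancy, not the formula, is the safety measure.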
So, all the problems that caused incidents in the past were taken into consideration, and the IEC summarized the safety requirements in the IEC60601-2-1 standard. Software development and testing must now be done in accordance with IEC62304. All the critical interlocks are now implemented in both hardware and software independently.
So, was there a happy ending following these tragic incidents?
Have we reached a satisfactory, safe and effective state of the art?
Well, the safety of medical equipment using ionizing radiation has definitely improved, but to assess how effectively we need to take a look at what happened next.
In the late 1980s, 205 patients were massively overdosed in the UK. The root cause of this incident was an error in the dosimetric calibration.
In the 1990s nearly 1000 patients received underdoses of up to 30% below the prescribed dose over the course of 10 years, during which time the error remained undetected. A computer programming error made during treatment planning commissioning was identified as the cause of those incidents.
In the early 1990s 27 patients were overdosed in Spain, and 20 deaths were directly attributed to the incident. This incident involved a single fault condition within the equipment that appeared during maintenance activities.
In 2001 an incident happened in Poland, resulting in the overdosing of five patients.
As a result of such major incidents, the International Atomic Energy Agency (IAEA) produced substantial reports that recommended formal procedures of equipment handover after maintenance, and quality checks involving physicist review of the state of equipment and its output.
The approach to safety in radiotherapy has undergone a paradigm change, from being component-malfunction focused to being patient and system focused, based on a quality assurance programme and quality control baselines for the specific equipment, a detailed inventory including all software revisions, etc.
It is clear that every tragic incident in the field had one common element: a chain of events that led to the catastrophic, unacceptable result of a patient's serious illness or death. The links in this chain of events are of different natures: use errors, hardware and software malfunctions, equipment calibration, communication of data, and finally processes and procedures not lining up, leaving space for interpretations, assumptions and errors. But now we have developed and implemented many layers of safety, each of them helping to keep the risk of harm to a minimum. With high-risk products there will always be a small risk of serious incidents, but thanks to safety standards, quality control, risk management, usability engineering and software development processes, this risk is minimized.
We cannot completely eliminate the residual risk associated with end products. This is not possible. But it is absolutely a top priority that all parties involved cooperate, communicate and support each other in the continuous process of improvement. An open-minded and honest attitude is key to improving safety. The dangerous and unacceptable habits of hiding problems and of neglect must be eliminated first on the path to a better and safer future.
No doubt we’ve made great progress towards making technology safer and more effective.
But a long journey still lies in front of manufacturers to improve radiotherapy safety.
My 20 years’ experience in medical engineering
While working within biomedical engineering for the last 20 years, and medical electrical product safety evaluations over the past 12 years, I have noticed that even the largest and most experienced companies can struggle. Market access is a sophisticated process, involving several major aspects of engineering:
Hardware testing, according to well-defined methodology
Software verification and validation
Clinical evaluation of effects on real patients
Endurance and reliability testing (reliable does not equal safe)
Essential performance testing
Usability engineering, risk management
Flammability, biocompatibility, packaging, sterilization
And many, many more!
To make the situation more difficult, different markets have subtle differences in their regulatory requirements, and it is the manufacturer’s responsibility to read, understand and effectively apply these in their organization’s processes.
I have worked with the biggest players in the med-tech market, global organizations employing thousands of people. But I have also worked with one-, two- or three-person startups that managed to achieve their objectives very well and smoothly entered global markets with their innovative products. This may not always have been an easy process, but when well planned and crafted it can be an adventure and a joy.
I remember my first projects involving risk management. In 2005, when the third edition of the medical safety standard was finally published, it seemed to be ignored by the med-tech market for a while. However, around 2010 things intensified once the standard was recognized as state of the art and became mandatory in Europe, with North America and then the rest of the world soon following suit. At that time, I got the overwhelming impression that a risk management approach had taken the medical industry by surprise. Suddenly, hardware testing, drawing isolation diagrams and writing critical components lists had to take a step back on the list of priorities to allow risk management, usability engineering and software lifecycle processes to take the lead. And hence, I have spent the last decade evaluating these processes at various medical organizations, but also teaching many people how to create and implement them effectively.
After conducting dozens of evaluations of risk management processes and risk management files, with none fully complying with all requirements, I realized I had to get more involved in teaching. I already had experience of teaching mathematical analysis and physics to my colleagues at university, and I had always enjoyed it. So I decided to write down the most common issues I had seen in the certification process, the ones manufacturers most commonly struggle with. I created a long presentation and started travelling to different countries with my '601 seminars'. I had various groups of engineers from many different organizations, sometimes only a few people, sometimes close to 50 or 100. I quickly realized that this was probably the most important service I was delivering. After attending such seminars, my customers were much better prepared for a full certification project involving a full evaluation of processes such as usability and risk management, but they were also more aware of why isolation diagrams need to be created early in the R&D process and why appropriate safety components must be listed and carefully selected to meet minimum safety requirements.
There are different approaches to safety and plenty of tools to choose from: starting with component safety and fault tree analysis (which is quite archaic now), moving through system safety and hazard-based safety engineering, and ending with processes and user focus to minimize accidents. Accidents cannot be eliminated, but a lot can be done to minimize their occurrence and limit their severity. The modern approach to medical product safety is a combination of them all. There is still quite a lot of traditional hardware safety testing, like grounding, leakage current and dielectric strength. But the focus has shifted from component safety to system safety, with full consideration of all aspects that may significantly influence safety or add to accident probability: the environment where the product is used, user behaviours and habits, ergonomics, the interface, preparation procedures, and so on. However, the traditional hazard-based safety approach is still strong in the industry and will be utilized in medical product development for the foreseeable future. A good example here is the IEC 62368-1 standard for information technology and audio/video equipment.
I have been involved in biomedical engineering for the last 20 years. I studied the subject at university, earned a master’s degree in medical radiation protection, worked for several medical manufacturers, and have spent my last 12 years with UL in certification. Based on my experience, I can confidently say that medical engineering is the most innovative industry of them all! How medical procedures are performed today is very different from 20 years ago. We have seen great development in the field of medical robots, with all the major global players developing their own. Medical imaging equipment is also going through a period of immense innovation, achieving resolutions unimaginable even a few years ago. New methods of imaging are being implemented in radiotherapy equipment, providing much more focused and precise treatment than before. Good examples here are modern MRI (magnetic resonance imaging) equipment and IGRT (image-guided radiotherapy) solutions. Who even remembers now the first software-driven medical accelerators, or the advent of iterative image reconstruction methods on the early computer technology of the ’80s?
A fascinating example here is the IVD (in vitro diagnostics) market. In 1993 the first commercial IVD test for hepatitis C virus genotyping was performed in Belgium. In 1995 the first commercial Alzheimer’s test was performed in the same country. In 1996 the first DNA sequencing methods were developed in Sweden. Now, it is predicted that in the near future DNA sequencing will be able to test nearly everything ‘on the go’ for each patient entering a hospital, and the older technology of electrochemical testing will be retired. It’s almost unbelievable that, globally, IVD is now the largest sector of the med-tech market, followed by medical devices for cardiology and orthopaedic treatment, when in the early ’90s we were still learning and developing this technology.
I’ve been working on safety evaluations of magnetic resonance systems for over a decade now. When I ask my colleagues when they think this technology came to the market, I have never heard a correct answer. The concept of MRI scanning was developed early in the 20th century, but only the development of processors and computer technology allowed the first clinically useful MRI scans to be performed, in Aberdeen, Scotland, around 1980. This first medical MRI system, made in Aberdeen, is still available to view and admire in the London Science Museum. It is just behind the Apollo 10 capsule, which flew around the Moon before returning safely to Earth. A few decades ago, the medical world adopted MRI technology based on superconducting magnets and computer image reconstruction. Now, the field strength of superconducting magnets has reached over 10 tesla, and the speed of image computing is staggering, giving incredible results in terms of resolution. We have learned how to image not only tissue structure but also cellular metabolism, which has pushed forward brain mapping efforts and cancer detection accuracy.
Another example of a type of medical product that is very important to me is automated external defibrillators or public access defibrillators. I have certified these for safety for market access in Europe and North America, and now I see them everywhere: at my kids’ school, in superstores, in leisure centres, and every time this sight brings a smile to my face. It is so important that these products are popular in public areas. Almost every medical electrical product has its own unique standard, which describes basic safety and performance expectations, along with the testing and verification methods designed for that device. The concept remains unchanged:
Medical products must remain safe and effective in all intended use and foreseeable misuse conditions, in normal and single fault conditions, throughout their entire lifetime.
The goal of testing and verification is simple and clear: medical products must not create unacceptable hazards while in use, and the application of harmonized standards is now the global state of the art for obtaining objective evidence of this. Applied to defibrillators, this means not only that the electrical impulse will be delivered to the patient with the right energy and maximum effectiveness, but also that the impulse will be delivered only when it is required, and that the operator will remain safe while using the device. Sophisticated switching components in these devices make sure that the high voltage in the storage circuit, potentially dangerous to patient and operator alike, remains inaccessible in both normal and single fault conditions.
After 12 years’ involvement in safety evaluation, I have seen with my own eyes that the world is changing, and changing for the better… or should I say, for the safer.
Leaders in the labyrinth – finding and leading the way
Modern business is a complex, sophisticated matter. Companies develop strategies, constantly watch key performance indicators, follow processes, and use expensive tools to earn profit and deliver excellent experiences to their customers. The more complex and multidisciplinary the products, projects and services they provide, the more difficult it is to see clear, strategic imperatives that link to the company vision, the mission and ultimately: profit.
Understanding what your business does is just the first step, and many more steps must be taken on the journey to business success. Everyone in the organization needs to see the purpose and understand the ‘why’. No doubt about it. This is the second step, or the next level, but many organizations fail to reach it. People easily get lost, straying as though through a dense fog in search of directions, simply unable to see the ‘why’, which stays hidden behind a complex business structure and long processes described in endless procedures.
Many say the ultimate goal of a business is profit. But this isn’t quite right. Profit is not the purpose of a company; it is the fuel necessary to prosper, invest and grow. But what can a company do if profit does not come, if it does the essential tasks well, employs hard-working people, has good management and the right tools for the job, yet profit still eludes it?
Wiser companies achieve profit through trust, building lifelong relationships with their customers. The measure of trust in business is NPS (Net Promoter Score), which indicates how likely customers are to recommend the company. Hence, NPS is a good indication of a company’s health and a forecast for the near future. But again, why do companies stuck in an endless effort to provide a great customer experience receive low or moderate customer satisfaction, despite doing everything possible to improve it?
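For readers unfamiliar with the mechanics, NPS is typically derived from a single 0–10 ‘how likely are you to recommend us?’ survey question: respondents scoring 9–10 count as promoters, 0–6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. The following is a generic sketch of that arithmetic, not any particular company’s methodology:

```python
def net_promoter_score(ratings):
    """Compute NPS from a list of 0-10 survey ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) are not
    counted directly but dilute both percentages. The result ranges
    from -100 (all detractors) to +100 (all promoters).
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 7, 3, 5]))  # → 30.0
```

Note how the passives matter: even customers who are merely ‘satisfied’ (scoring 7 or 8) pull the score toward zero, which is why a moderate satisfaction level can coexist with a disappointing NPS.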
It’s simple. In a fog of complex strategies, imperatives, commercial goals, visions, missions and processes, people lose their focus and may not see what is most important. This applies to everyone in the company, leaders and managers included. Gaining and retaining customer trust is the single most important task of every manager and leader; once accomplished, it will set up the business for future prosperity and profit. The best way to do this is to maximize talent retention, no matter what. Through all your actions and behaviours, make sure that talented people will always want to work for the company. To achieve this, always listen to them carefully, with trust and confidence.
Once you come to this stage of your business journey, sit down, relax and watch. Watch your profit, satisfaction, market share and all the other Key Performance Indicators grow. And don’t forget to say THANK YOU to all those talented people around you. Simple, isn’t it? Thank You!