MED-EL Announces U.S. Launch of New MAESTRO™ Cochlear Implant System

14 09 2011

Whoa, yet more changes in the cochlear implant community this week!

System Features World’s Smallest Titanium Cochlear Implant, Up to 50% Extended Battery Life

DURHAM, N.C.–(BUSINESS WIRE)–The MED-EL Corporation announced today the U.S. launch of the new MAESTRO™ Cochlear Implant (CI) System, featuring the world’s smallest, lightest and thinnest titanium cochlear implant, the MED-EL CONCERT. The system also encompasses enhancements to the OPUS 2 audio processor, including a new D Coil that extends battery life by up to 50 percent, MAESTRO System Software 4.0 offering 16 years of backward compatibility, and seven exciting new color options. The MAESTRO Cochlear Implant System recently received approval from the U.S. Food and Drug Administration (FDA), and the company has been preparing for rapidly increasing global demand. Implant centers can now begin placing orders for the new system.

More information after the jump.

Phone calls #10, 11, 12

11 09 2010


I really wanted more practice on my phone after my dip last week. I treated myself to an iPhone (do I NEED any more incentive? lol) and I had a hard time figuring out how to use the answerphone. I’ve never used one in my life! So I asked my stenographer Karen to listen to the answerphone instructions on the iPhone so I would know how to play, save, and delete messages. Then I proceeded to have fun … Yeah, this phone malarkey is a LOT of fun! – but you’ve got to keep it simple, or you’d be tearing your hair out in frustration.


Karen has an English accent, almost a BBC accent (she’s from Cambridge), so I thought this would be a good one for me to listen to on the phone. I grew up hearing the sound of the BBC News on TV every evening, so I am familiar with the sound of vowels in the Queen’s English. I told her to talk to me about anything, went into the kitchen, and rang her phone. She chatted about her day so far and I understood everything she said except for one sentence. After this phone call, SHE was the one in tears, not me. (Ahhhhh, bless!)

Tip: Think about accents and familiarity.


I then had a voicemail message from Stu. I understood all of it but got stuck on 2 words, which meant I didn’t understand 2 sentences. He sent an email after the voicemail message and from this, I had some clues to the subject of the voicemail. I listened to it again and got the full message this time. Whoop! I also discovered that I had been holding the phone incorrectly – it doesn’t need to be halfway up my head – LOL – it sounds louder when aligned with my jaw.

Tip: Try different positions for your phone handset.


Michele then phoned me. Our topic was ‘meeting up’ – place, time, what we’ll do, and a phone number. I was on a roll, and instead of letting the voicemail pick up, I picked up the phone instead – I was a bit nervous as I had no idea what she would say (having clean forgotten our agreed topic!), and Michele has a South African accent which is tricky for me. After the call, Michele said ~

Whoooooo !! Hoooooo!!!!!!

You practically sailed through!! Just a small hesitation on the ‘steps’ (as I said on top of the steps – on where we will meet)

That was some conversation! Hoo boy!!

You got stuck with just two numbers and that was the 1 when I had said 9. (similar again) The other number I had said 5 and you said 9. (again similar) but dang!! – so good!

What was interesting was that you were able to swing the conversation around to your questions too,  for example you asked me whether we would meet up before or at the wine bar on Tuesday – that took me by surprise! I’m soo excited!


Tip: Take the initiative and ask questions on the phone. You’re setting the context and increasing your confidence.

I discovered an interesting thing today. I was talking to a colleague Patrice, telling her about the quality of the voices I am hearing on the phone and the difficulty I am having with understanding. Voices sound different – echoey, tinny, and very artificial. Patrice said that voices sound like that to hearing people too!


To a hearing person, phone voices sound like they are speaking on a radio. Voices are being transmitted over the telephone lines so they are not heard as they are in real life. The bandwidth is limited and so what a person hears is compressed, losing frequencies and therefore richness of speech tones. Isn’t that interesting!
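Out of curiosity, here’s a tiny sketch of that bandwidth limit in action. The 300–3400 Hz band edges are the conventional narrowband telephone figures (my own addition, not from Patrice!), and the harmonic series is just an illustrative toy, not real speech processing:

```python
# A toy illustration of why phone voices sound "tinny": the classic
# narrowband telephone channel only passes roughly 300-3400 Hz, so
# any part of the voice outside that band is simply lost.

TELEPHONE_BAND = (300.0, 3400.0)  # Hz, conventional narrowband limits

def passes_phone_line(freq_hz: float) -> bool:
    """Return True if a pure tone at freq_hz survives the channel."""
    low, high = TELEPHONE_BAND
    return low <= freq_hz <= high

# Harmonics of a 120 Hz male voice: the fundamental itself is cut,
# which is part of why phone speech loses its richness.
harmonics = [120 * n for n in range(1, 30)]
kept = [f for f in harmonics if passes_phone_line(f)]
lost = [f for f in harmonics if not passes_phone_line(f)]
print(f"kept {len(kept)} harmonics, lost {len(lost)}")  # kept 26, lost 3
```

The ear and brain reconstruct the missing fundamental, which is why hearing people still recognise the voice – but the compression is real, and everyone hears it.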

Here are a few ways to practice listening on the phone:

Speaking Clock

In the UK, the number is 0871 789 3642. If dialling from abroad the number is +44 871 789 3642. The cost of calling this number from a BT line is 10p per minute, maximum cost 15p. When dialled the clock will speak for 90 seconds, you can hang up before this time, but the line will be engaged until the 90 seconds has elapsed. The speaking clock service is not available on the Orange or 3 Mobile mobile telephone networks, as they use 123 as the number for their Answerphone services.

You can find the worldwide numbers for speaking clocks after the jump.

Cochlear implant manufacturers’ websites

Advanced Bionics provide materials giving you the opportunity to practice before you talk with your family, friends … and scarier people.

Cochlear offer telephone training (U.S. only).

Call centres

Think about who’s going to leave you on hold for hours while a recorded message is played. There’s your mobile phone network’s customer services number. Your bank is a good challenge, when they put you through to an Indian call centre. A good one is sales calls; keep saying ‘Pardon?’ and let the salesperson repeat themselves ad infinitum, then tell them sorry you’re not interested in buying. Tip from Pidge: Dial 1471 (UK only) to retrieve the last number that called you. This is also known as Last Call Return in other countries.


Ask everyone in your contacts list to leave a voicemail message on your mobile phone, save these, and practice listening to them.  Later on, do the same again with a landline phone which is likely to be of poorer audio quality.

Assistive devices

Use CapTel to caption your phone calls and support your listening. Try Skype and repeat after your caller, then ask them to type their sentence if you get it wrong; the video facility is also very helpful for facial expressions (and cheating by lipreading!). Try FM receivers and Bluetooth devices, or a direct audio input lead which connects your cochlear implant directly to your phone.

If you have one cochlear implant, you can try using your hearing aid in the other ear; get a neck loop, set cochlear implant and hearing aid to telecoil, and use both ears when talking on the phone. In the US, try TechEar or ClearSounds, and in the UK, try Connevans for neck loops. (Thanks to Sylvia, Paul, and Patty for this tip)

Are there any more ideas or tips out there? I’d love to hear them!

Cochlear implant mapping

4 05 2010

I’ve had 5 “mapping” sessions since activation. A mapping is a reprogramming of the cochlear implant, to readjust the electrical stimulation limits of the electrodes as each user’s brain adapts to sound and fibrous tissue grows over the internal implant. Mappings are typically carried out once a week for the first 6 weeks, then every few months, then twice annually.

At each mapping I was given increased volume and it was an opportunity to address any concerns with the audiologist. This was followed by a coffee break in the hospital cafe then a speech therapy session. I have one more mapping session this week, then my next one is in June when I have my 3 month check.

It’s been a rollercoaster ride. I started with beeps and pushed so hard that I got a constant whine when I put the implant on. This set me back and I had to slowly build up my tolerance of high frequency sounds again from zero, bit by bit; I have since successfully avoided a recurrence of the whine. I have not yet reached full volume – there is still some way to go, which is kind of scary. I found last week quite difficult as everything seemed too loud and I started feeling stressed, but I hung in there and carried on wearing the cochlear implant until I got used to the increased sound levels.

Increased sound levels can be problematic for cochlear implant users because they are more sensitive to loudness changes. A normal hearing person can hear a wide range of sounds, from very soft whispers to loud rock bands; this dynamic range of hearing is about 120dB (normal speech falls within the 40-60dB range). However, a cochlear implant can only deliver an electrical dynamic range of about 20dB, so the processor’s input dynamic range (IDR), or sound window, must compress up to 120dB of sound into it. Therefore the cochlear implant user is more sensitive to changes in loudness than a hearing person.

If the IDR is small, sounds outside it are omitted or compressed: sounds that are too quiet are cut off, and sounds that are too loud are compressed and sound distorted. The three main brands of cochlear implants have different IDRs: Advanced Bionics has 80dB, MED-EL 55dB, and Cochlear 45dB but with autoranging. I currently have my IDR set at 60dB.
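To get a feel for the numbers, here’s a toy calculation of my own – a simple linear squeeze, not any manufacturer’s actual algorithm – using the 60dB IDR mentioned above:

```python
# Why loudness changes feel magnified: ~120 dB of acoustic range gets
# squeezed into roughly 20 dB of electrical range. Real processors use
# more sophisticated compression curves; this linear map is a sketch.

ACOUSTIC_RANGE_DB = 120.0   # soft whisper up to loud rock band
ELECTRICAL_RANGE_DB = 20.0  # usable electrical dynamic range

def compress(acoustic_db: float, idr_db: float = 60.0) -> float:
    """Map an input level onto the electrical window.

    Sounds below the IDR floor are omitted entirely; sounds within it
    are scaled linearly; anything above the top is clipped (heard as
    distortion).
    """
    floor = ACOUSTIC_RANGE_DB - idr_db   # quietest sound captured
    if acoustic_db < floor:
        return 0.0                        # too quiet: cut off
    clipped = min(acoustic_db, ACOUSTIC_RANGE_DB)
    return (clipped - floor) / idr_db * ELECTRICAL_RANGE_DB

# A 10 dB jump in the room becomes only ~3.3 dB of electrical change,
# but relative to a 20 dB window that's a sixth of everything I hear.
step = compress(90) - compress(80)
```

With a 60dB IDR, a whisper at 50dB simply vanishes (below the floor), which is the trade-off against the distortion you’d get from an even wider window.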

What actually happens in a mapping session? I replace my processor’s battery with a direct connect input lead to the audie’s computer and put the processor back on my head. (Yeah, this freaked ME out the first time I did this).

The audie’s software will reprogramme my implant’s Harmony external processor.

My cochlear implant has 16 electrodes and when each one is stimulated, I will sense each stimulation as a beep.
The audie will set the Threshold (T) levels [to access soft speech and environmental sounds] and Comfort (M) levels [the amount of electrical current required to perceive a loud signal] for each electrode by judging the most comfortable and the highest stimulation I can tolerate – the most comfortable and loudest beeps I am happy to listen to.

I use the loudness scaling chart to indicate to the audiologist which level each stimulation correlates to, ranging from ‘Just Audible’ to ‘Too Loud’.

Then the audie ensures the M levels are similar in terms of my perception, so that the volume is the same in each electrode – I was able to tolerate very high levels of high frequency sounds this week but she brought these back down, otherwise everything would have sounded weird and unbalanced.
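For the technically curious, the map being built can be pictured as a simple table of T and M levels, one pair per electrode. This is a toy model of my own – the units and numbers are invented, and real fitting software uses device-specific current units:

```python
# A sketch of a cochlear implant "map": each of the 16 electrodes has
# a Threshold (T) and Comfort (M) level. Values are illustrative only.

from dataclasses import dataclass

@dataclass
class Electrode:
    t_level: float  # softest stimulation reliably detected
    m_level: float  # loudest stimulation that is still comfortable

def balance_m_levels(electrodes, cap):
    """Pull any outlying M levels back down to a cap, mirroring the
    audiologist reining in the high-frequency electrodes so that no
    band sounds louder than the rest."""
    for e in electrodes:
        if e.m_level > cap:
            e.m_level = cap
    return electrodes

# 16 electrodes whose measured M levels creep upward (180..195);
# balancing caps the loudest ones at 190.
cochlea_map = [Electrode(t_level=100, m_level=180 + i) for i in range(16)]
balanced = balance_m_levels(cochlea_map, cap=190)
```

The real balancing is done by my perception, of course – the software just stores the numbers the audie settles on.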

This mapping method is rather tedious and drawn out over several months. Clarujust, new software (currently in FDA trials) from Audigence Inc, interfaces the patient and processor to a computer; words are played and repeated as heard, and the software adjusts the map accordingly. Mapping this way reportedly takes 30 minutes. This software can be used by all hearing aid and cochlear implant companies except Advanced Bionics; however, Phonak signed up with Audigence Inc this year, prior to the Advanced Bionics/Sonova acquisition.

When a mapping is new, it tends to sound louder, until I get used to it. It takes 3 days to get used to a new mapping, then I find loud sounds have become softer and more tolerable, and I can hear a wider range of soft sounds. It is uncomfortable turning up the volume of life to the max every few days. I still have to brace myself for the jolt first thing in the morning, making the transition from complete silence to a full technicolour river of loud sounds pouring into my brain. Amanda’s Tip of the Day: If you wake with a hangover, take your time to put on your CI and turn down the volume. It helps. A little.

It’s an amazing learning process as I am also trying to identify sounds as well, discovering amazing new ones, and learning to discriminate between things that sound similar to me. My hearing is like a baby, it needs time to learn and grow, but it can be fun too.

Erber’s model set forth 4 levels of auditory skill development;

    Awareness of sound (presence v absence)
    Discrimination (same v different)
    Recognition (associating sound/word with meaning)
    Comprehension (using listening for understanding)

I have now reached the second level, I am hearing things but finding it difficult to discriminate between some sounds. Obviously, this means I am still lipreading. In my speech therapy session this week, I discovered I can’t distinguish between Gear v. Dear, tIme v. tAme. I can listen and fill in a word missing in a written sentence, but listening to a whole sentence and being given one written word is more difficult. With hearing aids, both tasks would have been impossible.

In addition to mapping, my progress is occasionally evaluated with an audiogram and speech perception performance with the cochlear implant in a soundproof booth. These tests assist the mapping process and indicate any further adjustments required. I expect I’ll have this done this week, and hope to have improved upon the 18% I achieved in my last speech perception test.

I was programmed with ClearVoice last week but am still adjusting to my new mapping, so I have just been ‘tasting’ this wonderful addition. I tried it on the train; the roar of the train engine and clashing sounds (brakes or pressure pads? – haven’t worked this sound out yet) dropped away significantly and I could clearly hear voices around me. It was awesome. Yesterday, I was sitting by a window and became conscious of this sound. I realised it was the rain spitting outside. In the garage, I could hear the drumming of the rain on the roof and the traffic outside. With ClearVoice on, the traffic became very quiet and the rain became a very clear high PLINK PLINK PLINK, and a lower PLONK PLONK when it came through a hole in the roof and landed on an object. Again, awesome!

Try out the ClearVoice demo for yourself. Don’t forget to say the mandatory WOW!

Shanti is waiting for her cochlear implant operation date and works as a personal trainer and complementary therapist. She gave me a super aromatherapy massage yesterday and I left feeling very relaxed. As soon as I left, I plugged into my iPod and was amazed to hear that the tinny / Donald Duck tone of vocals had gone from a lot of songs. Perhaps there is a link between relaxation and better hearing. Today, voices sound largely normal and it’s so so so NICE to have some normality again!

Photos courtesy of Byron

The unbearable loudness of hearing

25 04 2010

It has been one month since activation and my world has changed beyond recognition and exploded into a kaleidoscope of sounds. Some are old sounds which sound different, some are completely new. The world sounds different through a cochlear implant and it is starting to sound much better.

Each time I have a mapping, my bionic hearing is adjusted – at the moment we are still focusing on increasing the volume. For the last week I have been listening in awe to the (surprisingly noisy) birds, the crackle and pop of rice krispies, my office clock ticking, the ssss of my perfume atomizer, the jangle of keys and my dog’s clinking collar tag, and all the little sounds my dog makes when he wants something! I am discovering that some things, heretofore silent to me, actually do make a sound. The photocopier touchpad beeps, the door of the room next door squeaks (and now annoys me immensely), my hands rasp when I rub them together and so does moisturiser when rubbed on my skin, running my fingers up my arm makes a soft rasping sound too.

I have been utterly shocked by the cacophonous ssshh of brakes and beeps of doors on public transport, the roar of traffic, people in the street, the sharp crackle of plastic bags and paper, the clatter of crockery, the flushing toilet, the microwave nuking food, and the kill-me-now roar of aeroplanes (unfortunately, I live near Heathrow). Last Saturday was the first day in my life that I was able to hear all the birds, so I sat in the garden, in the sunshine, and listened. This also happened to be the first day of the airline stoppages due to the Icelandic volcano eruption, and the skies were silent. I only realised this week, when the airports re-opened for business, just how much noise aeroplanes make. Over the last three days, I have become quite overwhelmed by the loudness of some sounds, now that my implant’s volume is nearing an optimum level.

I went to a social event a few days ago and although it was noisy, I was able to pick out people’s voices more easily, which made lipreading easier. I heard this strange sound behind me and turned around to see a woman playing a harp. It sounded totally different from what I expected, like a soft guitar.

The strange thing is that high frequency sounds seem much louder to me than other sounds. A person with a hearing loss cannot screen or ‘filter’ out sounds in the way that hearing people do, so everything seems loud. This is why noisy places are so problematic: hearing aids and cochlear implants amplify all sounds, so environmental sounds are as loud as voices, and the hearing impaired person is unable to filter out the background noise (losing the ‘cocktail party effect’). Now that the high frequency sounds are so new to my brain, they seem extra loud to me; my brain is going WOW What’s This?, sitting up and taking notice, and is only now listening to low frequency sounds again. The world is starting to sound more normal. Voices still sound tinny so it’s a struggle to understand speech.

I can now hear the dial tone on the phone. I started off by listening to phone sounds (these work on both PC and Mac) and will next try listening to a script I’ll give to a friend.

There are four levels of auditory skill development according to Erber’s model – awareness of sound (presence v absence), discrimination (same v different), recognition (associating sound/word with meaning) and finally, comprehension (using listening for understanding). As I was born deaf and have been deaf for 40 years, I’m going to struggle harder and for longer to climb up this ladder.

It is a common misconception that we hear with our ears. In fact, the auditory system, from the outer ear to the auditory nerve, merely provides the pathway for sound to travel to the brain. It is in the brain that we hear. If a person developed hearing loss after learning language (postlingual hearing loss), the brain will retain an “imprint” of the sounds of spoken language. The longer a person is deaf, the more challenging it is to recall these sounds. In the case of a person who has never heard (hearing loss at birth), or who has had very limited benefit from hearing aids, sound through a cochlear implant will be almost entirely new. That person will need to develop new auditory pathways, along with the memory skills to retain these new sounds. Whatever a person’s history, rehabilitation can be very useful in optimizing experience with a cochlear implant.

Being able to detect sound, even at quiet levels, does not mean that an individual will be able to understand words. Norman Erber developed a simple hierarchy of listening skills that begins with simple detection: being aware that a sound exists. An audiogram indicates detection thresholds. Although thresholds with a cochlear implant may indicate hearing at 20 dB to 40 dB (the range of a mild hearing loss), the ability to understand words can vary greatly. The next level of auditory skill is that of discrimination; that is, being able to determine if two sounds are the same or different. For example, one may detect the vowels oo and ee but not be able to hear a difference between the two sounds. The third level of auditory skill is identification. At this level, one is able to associate a label with the sound or word that is heard. Key words may be clear, such as cloudy or rainy, within the context of listening to a weather report. Erber’s final level of auditory skill is that of comprehension. Words and phrases provide meaningful information rather than just an auditory exercise. At the level of comprehension, a person is able to follow directions, answer questions, and engage in conversation through listening.

(Source: Advanced Bionics)

I’m still at the stage of detecting sounds and trying to move into the next stage of discriminating between sounds.  Two weeks ago, I was unable to tell the difference between PAT and BAT, TIN and DIN, FAN and VAN. With the practice I have done, I am now able to do this with almost no errors. I am now working on listening for the difference between MACE and MICE, and DEAR and GEAR – which is difficult as D and G sound so similar. I don’t know what to listen for so am hoping the brain kicks in at some point!

My speech perception is improving slowly. I have tried to make discrimination practice fun by listening to Amanda on Skype. She will give me a colour, a month, a day of the week, or a number between 1 and 10 – we decide beforehand which closed set we will do. Maybe next I will try tube stations or football teams, whatever I think I can cope with, to keep it fun. Using Mac to Mac, the in-built sound (and video, for lipreading) quality is very good, aided by my use of a direct-connect lead to my processor. I am trying to work towards ‘open sets’ – unknown sentences – by asking people to put a hand over their mouth and give me a sentence. Patrice gave me my first sentence this week: “Bob and Kirby are waiting for me in the car park” and I got it correct except for the word “car”. She gave me a second sentence and I got that spot on. With practice, I will improve. We also tried a discrimination exercise: I am now able to hear the roadworks behind the office – they had been working there for a year and I had missed it all (lucky me). So when they hammered, drilled, or dug with a spade, Patrice told me and I listened for the different sounds.

Music is improving too. I am finding that rock with vocals louder than the music wins hands down. Opera sounds good, and piano/flute/guitar sound quite good. There are musical resources specifically for CI users. Advanced Bionics offer Musical Atmospheres (free for AB users), available online or on CD, where new music is discovered through 3 hours of recorded musical examples, each containing increasing levels of complexity in musical appreciation, helping to establish a firm foundation for musical memory. They also offer A Musical Journey through the Rainforest and Music Time for children. MED-EL offer Noise Carriers, a musical composition available on CD – see Listen, Hear! newsletter no.20 for further information. Cochlear don’t seem to have any resources but they do offer tips.

I am finding that I am feeling soooo much better than I did with hearing aids. I used to have headaches almost every day; I was always exhausted from the constant effort of lipreading, reading the palantype (CART), and concentrating to make sure I didn’t miss anything, and stressed by the thought of any social event. Now, I’m not exhausted every evening, I’ve had one headache since activation, lipreading is somehow easier as I’m getting more audio input even though people still sound like Donald Duck, and I feel much more relaxed overall, and more positive about communicating with ducks… er, people.

I’ve finally discovered the noisy world that everyone lives in. This noisy world should get a bit quieter this week when I get ClearVoice, which will automatically reduce background sounds so I can concentrate on voices. It’s almost a magic wand for hearing loss. All I will then need is to be able to comprehend speech, and I’ll do a convincing fake of a Hearing Person.

I’ve lost the clouds but I’m gaining the sky. And the sun will shine. You’ll find me out there, in the Hearing World, shining brightly with happiness. And as the video below nicely demonstrates, I want to kick your butt!

The Farmer’s Cheese

14 04 2010

You’re invited to join MED-EL for a musical performance celebrating 2010 as their 20th anniversary year – check out the performance dates for the children’s musical “The Farmer’s Cheese”.

Based on the children’s book written by Geoff Plant of MED-EL and the Hearing Rehabilitation Foundation, “The Farmer’s Cheese” has been devised for musical theatre by Oliver Searle. It is performed on stage by two actors and a six-piece ensemble and, akin to Peter and the Wolf, each animal is represented by an individual instrument. The irate farmer remonstrates with the animals in turn while the mouse looks set to win the battle of the cheese.

24th April 2010 at 2.00 pm, Turner Sims Concert Hall, University of Southampton, Salisbury Road, Southampton, SO17 1BJ

8th May 2010, at 3.30 pm, (refreshments available from 3.00 pm), The Drill Hall, 16 Chenies Street, London, WC1E 7EX

For ticket information contact MED-EL Customer Services on
01226 242 879 or email: charles@medel.

Performances last approximately 40 minutes. Tickets are free of charge on a first come first served basis.

If you want to find out more about the cochlear implant music scene, head over to CI Music Scene, where they also have a review of The Farmer’s Cheese. I’ve reproduced it here as it’s so hard to read the text on their website! (Tut tut)

Many thanks to Nicky Broekhuizen of CICS Scotland for her review of the Farmer’s Cheese which first appeared in CICS October Newsletter.

We, along with a number of other CI families, went to the Scottish Storytelling Centre to see ‘The Farmer’s Cheese’, a stage version of a children’s book of the same name by Geoff Plant. The story has been adapted into a musical dramatisation by Oliver Searle, especially targeted at hearing impaired children.

The forty-five minute production was staged very simply: no scenery, just the six musicians from Symposia, a collective of Glasgow-based musicians, on stage behind the two actors, providing the accompaniment with a range of wind and string instruments, including the flute, violin and the ’cello. Indeed, it was quickly clear how integral the musicians were to the performance, their silent yet humorous entry onto stage grabbing the children’s immediate attention.

From that moment on the audience were hooked. The ensemble was soon joined on stage by the principals – the cheese-loving farmer and the narrator who, for the second part of the play, donned ears and a tail to turn into the infernal mouse. The plot is straightforward; in the style of ‘the old lady who swallowed a fly’, the farmer buys a succession of animals with the hope they will catch their predecessor. The story is heavy on repetition and told in uncomplicated language, so none of the children had trouble following the action. The actors were completely engaging and physical, holding the attention of the audience throughout and generating a lot of giggles and belly laughs along the way.

The musical accompaniment was a vital, integral part of the performance. In the first part of the play each of the six instruments represented a different animal with a signature melody – not unlike ‘Peter and the Wolf’ – with the flute, for example, bringing to life the scampering, light-footed (and light-fingered) mouse and the prowling cat evoked by the ‘cello. Later, various members of the ensemble played key parts in the farmer’s increasingly desperate, and humorous, schemes to retrieve his cheese with bursts of rock and roll amongst others. This clever weaving of musical instruments into the story was an excellent way in which to introduce live music to implanted children and, with the music never competing with the actor’s voices, it didn’t detract from their understanding of the story.

Tom was on the edge of his seat throughout, utterly absorbed and, from the many chuckles and absence of chatter, it was obvious that he wasn’t the only one. The extent of his understanding was obvious from the complete lack of questions both during the play and afterwards.

So, thank you MED-EL, for sponsoring this work by Oliver Searle and Symposia. Not only has it given hearing impaired children a wonderful introduction to live music and theatre in a truly accessible way, it also brought about an excellent opportunity for families to get together – and we all know how important and useful that is. Let’s hope ‘The Farmer’s Cheese’ reaches other areas of the UK in the future…..

Source: CI Music Scene