


  • How to effectively brief your technical copywriter for optimum results

    If you have agreed commercial terms with a technical copywriter, you can get down to the business of content generation – and this starts with an appropriate level of briefing. Even the most talented copywriter can only ever be as good as the briefing you provide.

The briefing’s level of detail depends on the target content’s format. Technical articles written as components of your content marketing campaign must deliver quality technical content of value to your audience, while also contributing significantly to your technical and commercial presence through the messages you want to project. So the focus of the article must align with the technologies the company is seeking to promote, and wishes to be recognised for. The briefing for such an article could be delivered like this:

- An initial email to outline the project: the products or services to be covered, the key messages, the size and format of the content, deadlines, and the key stakeholders and their job roles. Any available briefing materials should be attached to the email, or made available through links. This gives the copywriter an opportunity to scope and understand the project and what is expected of them – and to prepare a set of questions to promote dialogue and mutual understanding of the requirements.
- Give the copywriter some time to review the briefing materials and prepare questions, then set up a Teams meeting to discuss the requirement in more detail. Those present to meet with the copywriter might include:
  - The project manager, to explain and answer questions about the company’s objectives for the article
  - A technical specialist, to discuss the technology to be covered in the article
  - A marketing specialist, if any additional marketing input is required.

To ensure that the copywriter understands the messages, the technology, and everything else required of them in detail, the meeting could cover points such as:

- Names and roles of stakeholders in the project
- Content format: technical article, white paper, thought leadership article, or something else
- The target audience: project managers, engineers, or product managers?
- Where the content will be placed: trade journals, the customer’s website, conference proceedings, a mailing, an exhibition handout, a company brochure, or somewhere else
- Approximate length of the piece, in words
- Spelling: UK or US English?
- Deadlines for the first draft and for final completion
- Suggestions for title and subtitle
- Product or service name and a brief description
- The customer pain points the product solves: energy efficiency, size, weight, speed, reduced time to market, cost, durability, reliability, or something else
- Why the product solves the customer’s problem better than earlier versions or competing products
- Establishing credibility: an explanation of the technology that enables the product’s performance, and a review of the collateral available
- Boosting credibility with a brief case study and/or testimonials from the end user or a relevant company employee
- A round-up of the key messages

Not all briefs require this much effort, though. It depends on the type of document, and on how well the copywriter and customer have come to understand one another. For example, one customer currently asks me to write promotional emails from time to time. Now that we’ve established the format, all I need is an email from them with a data sheet for the new product, and a paragraph explaining the problems it solves and its benefits.
At the beginning of a new customer/copywriter relationship, it’s best to take a painstaking, structured and documented approach, to protect both parties and ensure accurate results the first time. But as the partnership evolves, and mutual trust and understanding grow, it’s possible to be more flexible where appropriate – as long as there remains a clearly defined mutual understanding of each project’s objectives. In any case, it’s now time for the copywriter to write the draft, as the first step towards the edited, approved, and finalised document.

  • Comparing X-ray and CSAM Inspection: Which is Better for Failure Analysis?

    Modern electronic – and even electrical – subassemblies are complex devices that can fail for many reasons, either during manufacturing or in the field. Some failure modes, such as delamination, voiding or solder bridging, are mechanical, yet they cannot be easily seen as they occur internally. Breaking open a sealed subassembly is unattractive, especially if doing so destroys the evidence of the fault. Accordingly, a non-destructive way of viewing the subassembly’s internal construction and components is needed; one that supplies images of sufficient resolution to show clearly where the fault lies.

In fact, two complementary approaches are available: X-ray inspection and Confocal Scanning Acoustic Microscopy (CSAM). Broadly, CSAM reveals air gaps, voids and delamination in materials that X-ray cannot see; for example, delamination in a circuit board would be invisible to X-ray but easily seen with CSAM. Conversely, voids in a BGA ball would be difficult to visualise with CSAM but easy with X-ray. Below, we look at each approach in more detail.

CSAM

CSAM techniques detect material-weakening flaws such as surface cracks, voids, delaminations and internal porosity better than any other inspection method. CSAM systems non-destructively inspect materials layer by layer, delivering accurate and comprehensive results for failure analysis, strength, durability and reliability testing, and other insights.

CSAM uses ultrasound waves to detect changes in acoustic impedance. Pulses of different frequencies are used to penetrate various materials and examine sample interiors for voids or delamination. At interfaces between materials of different acoustic impedance, an acoustic reflection (an echo) occurs. The intensity and polarity of this echo are recorded and presented as a colour map of the sample.

One application that highlights the benefits of CSAM inspection involves SiC and GaN semiconductor assemblies used to switch hundreds of kilowatts of power for high-voltage electric vehicle batteries. These assemblies comprise devices mounted on substrates and interconnected through copper conductors that must be very heavy gauge to manage the required power levels. The associated die bonding process can produce voids; these can compromise thermal properties and result in early failures.

While X-ray technology might normally be used, it would be challenged in this case, as the X-rays cannot penetrate to the required depth within the copper conductors. By contrast, a CSAM system such as Nordson Dage’s GEN7, with its acoustic imaging, can penetrate sufficiently within the copper conductors – air gaps and voids reveal themselves clearly. The GEN7 also runs advanced operating software, with intuitive operator interface menus which help maximise results while saving operator time.

Image: Nordson Dage GEN7 CSAM inspection system
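To make the echo mechanism above concrete, the short Python sketch below computes the acoustic pressure reflection coefficient R = (Z2 - Z1)/(Z2 + Z1) at a material interface. The impedance values are representative textbook figures, not data from any particular system; the near-total, polarity-inverted reflection at a copper-to-air boundary is why voids under heavy copper conductors stand out so clearly in the acoustic colour map.

```python
# Acoustic pressure reflection coefficient at an interface:
# R = (Z2 - Z1) / (Z2 + Z1), where Z is acoustic impedance (MRayl).
# Representative impedance values (approximate, for illustration only).
Z = {
    "water":   1.5,      # coupling medium used in acoustic microscopy
    "copper":  41.6,
    "silicon": 19.7,
    "air":     0.0004,   # an air-filled void or delamination
}

def reflection(z1, z2):
    """Fraction of the incident pressure wave reflected at the z1->z2 interface."""
    return (z2 - z1) / (z2 + z1)

for a, b in [("water", "copper"), ("copper", "silicon"), ("copper", "air")]:
    print(f"{a:7s} -> {b:7s}: R = {reflection(Z[a], Z[b]):+.3f}")
# copper -> air gives R close to -1: an almost total, polarity-inverted echo,
# which is why voids and delaminations show up so strongly in CSAM images.
```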
X-ray inspection

Whereas acoustic imaging works by collecting reflected sound waves, X-ray images are created by shadow imaging instead of reflection. All material features are shown at once, and rounded objects that would scatter acoustic waves can be imaged in detail. X-ray imaging can reveal wire breakages – for example in BGA or other IC packages, transistors and diodes – as well as displacement or destruction of small internal parts, all of which would be invisible to CSAM.

In practice, this means that X-ray inspection systems allow quick and easy discovery of soldering defects such as open circuits, solder bridges and shorts – including BGA and other area array package shorts and open circuit connections – as well as insufficient or excess solder, solder voids, and poor solder quality. They can also identify component defects, including bond wire attachment quality, lifted leads, missing, misaligned, misplaced, or faulty components, or incorrect component values [i].

There are different types of X-ray inspection systems, from the ubiquitous 2D types to 3D tomosynthesis and computed tomography (CT) machines. In a 2D system, X-rays produced by an X-ray tube pass through the sample under investigation and into an X-ray detector, which converts them into visible images that the operator inspects. Any object or material of higher density than its surroundings absorbs more X-rays and casts a darker shadow on the detector. This is very effective for imaging electronics non-destructively, since solder locations, device terminations and copper tracks cast different, darker shadows than a laminated PCB. The greater the density difference between materials, the more clearly the contrast can be seen on the X-ray image. For example, voids or air bubbles within BGA solder balls are much less dense than the surrounding solder and can easily be seen in tin/lead or lead-free solders.

Computed tomography (CT) is a popular technique for creating 3D models from multiple 2D X-ray images taken at different angles around the object. The term µCT is often used when looking at electronic samples, since it can give spatial information on features approaching micrometre sizes. In a typical system, a sample is loaded into a motorised mount which rotates through 360 degrees. X-ray images are taken at regular intervals, and powerful reconstruction software uses a technique called back projection to create a 3D model of the sample under inspection. However, for larger samples, the region of interest must first be removed from the board into a smaller format so that it can be rotated close to the tube to create a µm-resolution model. In many cases this means the technique cannot be used non-destructively.

Tomosynthesis is a variation of µCT in which the sample remains in one position on a sample tray while the detector orbits through 360 degrees above it. The benefit is that the sample can be positioned very close to the X-ray tube for high-magnification images, and does not need to be cut in any way since it is no longer rotated. The trade-off with this geometry is that only relatively shallow models can be created, since the tomosynthesis arrangement yields less height information; however, since most PCB samples are relatively thin, this is not usually a practical limitation.
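Before looking at a specific machine, here is a deliberately simplified Python/NumPy sketch of the back projection idea described above: each 1D shadow profile is smeared back across the image plane along its acquisition angle and the results are accumulated. It uses no filtering and nearest-neighbour sampling, and the phantom and all parameters are invented, so it illustrates the geometry of the technique rather than production-quality reconstruction.

```python
import numpy as np

def project(image, angles_rad):
    """Forward-project an image into a sinogram: one 1-D line integral
    (shadow profile) per viewing angle, mimicking the X-ray detector."""
    n = image.shape[0]
    c = (n - 1) / 2.0
    y, x = np.mgrid[0:n, 0:n]
    sino = []
    for th in angles_rad:
        # Detector bin hit by each pixel at this viewing angle
        t = np.clip(np.round((x - c) * np.cos(th) + (y - c) * np.sin(th) + c
                             ).astype(int), 0, n - 1)
        sino.append(np.bincount(t.ravel(), weights=image.ravel(), minlength=n))
    return np.array(sino)

def back_project(sino, angles_rad):
    """Unfiltered back projection: smear each profile back across the
    image plane along its acquisition angle and accumulate."""
    n = sino.shape[1]
    c = (n - 1) / 2.0
    y, x = np.mgrid[0:n, 0:n]
    recon = np.zeros((n, n))
    for profile, th in zip(sino, angles_rad):
        t = np.clip(np.round((x - c) * np.cos(th) + (y - c) * np.sin(th) + c
                             ).astype(int), 0, n - 1)
        recon += profile[t]
    return recon / len(angles_rad)

# Invented phantom: a dense solder joint containing a less dense void
phantom = np.zeros((64, 64))
phantom[28:36, 28:36] = 1.0   # solder
phantom[31:33, 31:33] = 0.2   # void
angles = np.deg2rad(np.arange(0, 180, 1))
recon = back_project(project(phantom, angles), angles)
print("Reconstruction peaks at:", np.unravel_index(recon.argmax(), recon.shape))
```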
The Nordson Dage QUADRA 7 is an advanced X-ray inspection machine which provides all of these X-ray modes: it has built-in tomosynthesis with a quick change to CT. It supports high-resolution imaging, with 4K images of 100 nm resolution. The QUADRA 7’s industry-leading Aspire FP detector captures 6.7 MP images at 30 fps. These images are shown on two 4K UHD monitors, as conventional HD monitors lack the resolution needed to display them. High-resolution, high-magnification CT image slices can be captured quickly and easily from anywhere on a board or subassembly using Nordson Dage’s X-Plane technology, so hard-to-see defects such as interfacial voids can be spotted quickly. Accessories include a heated stage which recreates reflow oven conditions, so solder processes can be watched in real time.

Image: Nordson Dage QUADRA 7 X-ray system and failure analysis tool

Conclusion

From the above, we can see that X-ray and CSAM are somewhat overlapping yet complementary technologies. While purchasing a system of each type may appear to be a comprehensive and attractive solution, it could be hard to justify for many applications. One alternative is to use an inspection and test house like Cupio. This gives you access to either technology when you need it, without having to invest in capital equipment. It also gives you access to their expertise, as they can advise on which technology should be used and how it should be set up. They also work closely with the system manufacturers, so they always have the latest technology in terms of updates and new equipment.

References

https://nts.com/services/testing/non-destructive/scanning-acoustic-microscopy/
https://muanalysis.com/confocal-scanning-acoustic-microscopy-csam/
[i] https://www.electronics-notes.com/articles/test-methods/automatic-automated-test-ate/axi-x-ray-inspection.php

  • Using x-ray inspection and other techniques to spot counterfeit components

    As electronics manufacturers and distributors face an ongoing and significant threat from counterfeit components, they must protect themselves appropriately. In this article, Andy Bonner of inspection and test specialist Cupio (part of the IPP Group) looks at the counterfeit types that can be encountered, and then discusses how to combat their influx.

Forms of counterfeit

The Semiconductor Industry Association (SIA) classifies counterfeit parts into the following types:

1. The part has incorrect or false markings and/or documentation
2. It is an unauthorised copy
3. It is not produced by the original component manufacturer (OCM), or is produced by unauthorised contractors
4. It is an off-specification, defective, or used OCM product sold as “new” or working
5. It does not conform to OCM design, model, and/or performance standards

From the manufacturing user’s perspective, counterfeit components of any, some, or all of the above types could be hidden within incoming goods deliveries. However, they manifest themselves in different ways.

Type 1 parts would be clearly identifiable as counterfeits if we could see inside them, as they won’t have the internal structure implied by their label or documentation. X-ray inspection can supply the solution here. Types 2 and 3 may not be visually distinguishable – even internally, using x-rays – from good components. However, being of unknown and probably inferior quality, they are liable to field failure, where they’ll create the greatest harm in terms of safety risk, financial cost and reputation damage. Electrical testing becomes necessary to reveal shortcomings in their performance. Similarly, Type 4 counterfeits will appear visually like good components; however, as we will see, some can be identified using acoustic micro-imaging. Type 5 counterfeits could respond variously to investigation, like Types 1–4, depending on how they have been falsified.

Accordingly, a truly comprehensive counterfeit-trapping strategy will include x-ray inspection, electrical testing and acoustic micro-imaging working in complement to one another.

X-ray inspection

High-resolution x-ray inspection allows users to instantly see inside incoming components without destroying them, and to compare the images with those from known good parts. Components that appear externally identical often have internal differences if they are from different manufacturers or production lines, as Fig.1 shows.

Fig.1: X-ray images of $10 transimpedance diode – Real (L) and Counterfeit (R) (Courtesy Nordson Dage)

Often, a fake component may be more blatantly falsified. Fig.2’s counterfeit diode, for example, is a wrong part that has been further modified. In any case, anomalies in lead wires, die sizes and positions, and truncated pins can all be identified.

Fig.2: Ultra-fast diode – Real (L) and Counterfeit (R) (Courtesy Nordson Dage)

Overall, x-ray inspection using a high-resolution x-ray microscope like Nordson Dage’s Explorer one can facilitate rapid detection of counterfeit components. The machine itself is simple to use, and components can be inspected while still sealed within their shipping materials, making them logistically and commercially easier to return if necessary.

Electrical testing

Electrical testing is used to check a component’s performance against its published specification. This is done by analysing the electrical characteristics of the component’s pins under dynamic stimulus. The pin response relates directly to the component’s nature, internal structure and manufacturing processes. This strongly indicates whether or not the component’s bond wire and die configuration conform to the expected specification.
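The sketch below is a hypothetical illustration of this comparison idea, not ABI Electronics’ actual algorithm: a measured pin I-V trace is compared against a ‘golden’ trace from a known good part, and a deviation above a chosen tolerance flags the sample for investigation. All traces and the tolerance value are invented for the example.

```python
import numpy as np

def signature_mismatch(i_golden, i_test):
    """Normalised RMS deviation between two pin I-V signatures."""
    scale = max(np.max(np.abs(i_golden)), 1e-12)   # avoid divide-by-zero
    return np.sqrt(np.mean(((i_test - i_golden) / scale) ** 2))

# Hypothetical traces over a voltage sweep: a genuine diode with a ~0.6 V
# knee versus a suspect part whose junction conducts from ~0.3 V.
v = np.linspace(-1.0, 1.0, 201)
i_golden = np.where(v > 0.6, (v - 0.6) * 10e-3, 0.0)
i_test   = np.where(v > 0.3, (v - 0.3) * 10e-3, 0.0)

mismatch = signature_mismatch(i_golden, i_test)
if mismatch > 0.05:   # tolerance chosen arbitrarily for the example
    print(f"Signature deviation {mismatch:.2f} exceeds tolerance: flag part")
```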
Fig.3: Component electrical characteristic traces (Courtesy ABI Electronics)

Acoustic micro-imaging

Acoustic imaging complements X-ray techniques by providing information on different aspects of component integrity. Nordson SONOSCAN acoustic imaging technology, for example, transmits high-frequency sound waves into the sample, which is immersed in deionised water. Reflected sound waves help to accurately identify internal dimensions, cracks, voids, delaminations and interface quality issues characteristic of re-used components. A layer-by-layer analysis of material properties, as well as material consistency and thickness, helps separate the real components from the imposters.

Fig.4: Acoustic micro-image of two externally near-identical components. The lower component is counterfeit.

Accessing counterfeit detection technology

As explained, a comprehensive counterfeit detection lineup would comprise three machines: an x-ray microscope, an acoustic micro-imager and an electrical tester. With an x-ray machine, for example, costing in the order of £100,000, this represents a significant investment. In many cases the cost can be amortised, as the equipment would also fulfil roles within production and quality control areas. Larger companies may also be able to justify the expenditure by considering the volume of components processed, and the potential cost of a counterfeit contamination.

Alternatively, companies like Cupio offer inspection and testing as a service. With no ties to any component manufacturer or distributor, they can provide impartial third-party advice and results. And their costs, although dependent on the nature and amount of testing, would be a fraction of any capital machine expenditure. Such a service can be made extremely cost-effective simply through customisation to each client’s individual circumstances. A typical approach would start with x-ray inspection, as it’s the simplest and quickest to perform. Any samples appearing visually good, while still in doubt, could then be subjected to detailed electrical testing.

  • Why 'Back to Basics' articles are useful for marketing

    Do you sometimes find yourself taking on projects which are outside your core competence? This can happen when design capacity is stretched while projects become increasingly complex. One byproduct of this trend is that ‘Back to basics’ articles can be extremely popular, as they offer engineers a starting point for tackling new technology. For example, way back in 2013, I wrote a blog post titled “Back to basics: what are Y-capacitors?” – and I was recently told by Vicor Marketing that it’s still attracting significant traffic! For more evidence of this trend, visit the Power & Beyond website and scroll to their ‘Most read articles’ section. While these instances do not add up to scientific proof, they do suggest that writing ‘Back to basics’ articles about your product or technology could be a useful way of bringing your offering to the attention of an active new audience.

  • Improving UPS system energy efficiency

    Part of my motivation for writing technical content is the exposure to technology advances that I enjoy. This is especially true when such advances bring great benefits such as crop yield improvements or energy savings. A good example was an article I wrote for Kohler Uninterruptible Power, titled ‘Optimising UPS energy efficiency under all conditions’. Savings in this area are particularly important as, according to Power Engineering International, data centres are projected to consume one-fifth of the world’s electricity – and UPSs contribute significantly to this demand. To see the full article on the Kohler Uninterruptible Power website, visit Optimising UPS energy efficiency under all conditions - KOHLER Uninterruptible Power (kohler-ups.co.uk). To see the Power Engineering International report, visit Getting a grip on data centre costs and efficiency - Power Engineering International.

  • The difference between a press release and a technical manual

    What’s the difference between a press release and a technical manual? “They’re so completely different in form and function that you can’t meaningfully compare them!” you may say. And mostly, of course, you would be right. However, as a technical copywriter and content developer who has been involved in both, I can see some common factors. While the overtly promotional style of a press release would certainly be inappropriate in engineering and technical documents like manuals, some aspects of press release design are in fact applicable.

- A press release should concisely describe what the new product is, and how readers could benefit from using it. A manual should explain the product’s functions, and how readers should use them, while being similarly succinct.
- A press release should be written to engage its readers – and so should a technical manual. This can overcome the reluctance that some people have for using the manual; it can also make the manual easier to digest and understand when users do consult it.
- A press release explicitly projects the company’s image as well as its products. A manual isn’t written with these objectives in mind, yet it can still find itself acting as an ambassador for the manufacturer that produced it.

Imagine you’re assessing a potential new supplier, for example. How will your positive view of the supplier’s quality be influenced by a manual that’s attractively presented, well written, and rich in useful information? Conversely, how will you react to a manual which is poorly written, hard to follow, and lacking in the detail that you want?

Accordingly, while it’s natural to focus on identifying the right engineering material for populating your manual, it’s also worth adding a marketing dimension to the document’s design – maximising its role as an ‘invisible’ sales tool for your company. Below is the cover from a manual I worked on some time ago.

  • Why power grid vulnerability is a real business threat

    I wrote this article for Kohler Uninterruptible Power; you can see it on their website here.

Based on time in years between blackouts, the UK’s National Grid is one of the most reliable systems in the world, bettered only by France, South Korea, and Switzerland [i]. However, this does not mean that organisations can ignore the possibility of a blackout event. There have been four large-scale blackouts over the last twenty years, plus many more local power cuts affecting fewer people. Yet these events do not reflect the full picture, as there have been many occasions when we have unknowingly come perilously close to a significant blackout, with only drastic action by the energy providers preventing it from actually happening.

One such ‘near miss’ occurred as recently as 20 July, when parts of London were exposed to risk. Surging electricity demand collided with a bottleneck in the grid, leaving the eastern part of the British capital briefly short of power [ii]. Only by paying a record high of £9,724.54 per megawatt hour – more than 5,000% above the typical price – did the UK avoid homes and businesses going dark. That was the extraordinary cost of persuading Belgium to crank up ageing electricity plants to send energy across the English Channel.

The crisis shows the growing vulnerability of energy transportation networks – power grids and gas and oil pipelines – across much of the industrialised world, after years of low investment and not-in-my-backyard opposition. In a normal situation, without the grid bottleneck, the UK should have been able to send power to the southeast of England from elsewhere in the country – even from distant sites in Scotland, where offshore wind farms are producing more than ever. The problem is that the UK, like other industrialised nations, isn’t investing enough in its grid, leaving the system exposed.

Why the grid is vulnerable to power failures

Grid overloads are not the only threat to grid power availability. While the grid has designed-in resilience to sudden component failure, this may not always be sufficient. This is what happened on 9 August 2019, when Britain experienced its worst blackout for a decade, leaving over a million people without power [iii]. After a lightning strike, three sources of power generation failed within half a second of one another, losing 6% of generation, or 1.9 GW. The grid is only designed to cope with the sudden loss of the largest single component in the power system (normally Sizewell B, at 1.2 GW). That could have covered two of these failures, but not all three simultaneously. Not enough power could be supplied to meet demand, so 5% of customers were automatically disconnected to prevent the grid from complete collapse.

The incident revealed a mix of technical and administrative problems, along with a good measure of bad luck. The whole country, not just the area close to the original lightning strike, was affected equally. The disconnections were made by the regional distribution network operators (DNOs) who run the local wires, and were automatic and pre-programmed.

On top of this, another threat factor is emerging as the traditional grid evolves into the smart grid. This is happening because power generation is becoming less dependent on small numbers of large-scale, polluting but stable fossil-fuel generating stations, shifting instead to smaller but far more numerous units, including many based on cleaner but less predictable renewable energy sources – mainly wind and solar power.

Smart grids integrate the traditional electrical power grid with information and communication technologies (ICT) [iv]. Such integration empowers electrical utility providers and consumers, and improves the efficiency and availability of the power system, while constantly monitoring, controlling and managing customers’ demands. However, as smart grid components become visible online, they become vulnerable to cyber-attack. A spectacular example occurred as far back as 2007, when the Stuxnet virus, developed by a hostile state, was used to attack an Iranian nuclear facility [v].

How organisations can protect themselves – UPSs and generators

While business end users, from retail sites and manufacturing plants to hospitals and schools, cannot control such events, they can take steps to protect themselves from their effects. They can use UPSs and, if appropriate, generators to ensure that any sensitive on-site equipment is never exposed to dangerous supply transgressions or blackouts. Their exact approach can be tailored to the criticality of their electrical load.

For example, an organisation may decide that, in the event of a blackout, it is not essential to keep its IT facility up and running – provided that its equipment, software and the data being processed are not damaged by a sudden supply failure. A UPS alone could provide the protection it needs: when the UPS detects a power problem and switches over to its battery, it provides a warning signal to its supported load. This gives the load an opportunity to shut down safely and gracefully during the UPS’s battery autonomy.

However, many organisations, especially those supporting online transactions, have a crucial need to keep their data processing and communications running continuously through any mains power blackout. In such cases, a UPS-generator pair becomes essential. If the mains supply fails, the UPS battery can seamlessly take over while the generator starts up, stabilises, and is switched over to supply the UPS.
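To make the UPS-generator pairing just described concrete, here is a minimal sizing sketch in Python. All figures – load, battery capacity, generator readiness time, and margin – are invented for illustration, not Kohler data; the point is simply that battery autonomy must comfortably outlast the generator’s start-up and transfer time.

```python
# Illustrative UPS-generator autonomy check (all values are assumptions)
load_kw       = 250.0   # critical load
battery_kwh   = 50.0    # usable UPS battery energy
gen_ready_min = 2.0     # generator start + stabilise + transfer time
margin        = 5.0     # allowance for failed first start attempts

autonomy_min = battery_kwh / load_kw * 60.0   # 12 minutes here
required_min = gen_ready_min * margin         # 10 minutes here

print(f"Battery autonomy {autonomy_min:.0f} min vs required {required_min:.0f} min")
if autonomy_min < required_min:
    print("Autonomy insufficient: resize the battery or review the generator")
```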
The importance of choosing the right UPS supplier

Organisations can ensure the most cost-effective, yet most reliable, power protection solution for their critical load by discussing their requirements with an experienced, well-established and well-resourced supplier like KOHLER Uninterruptible Power. KOHLER can not only advise on – and supply – an optimum UPS or UPS-generator solution, but can also ensure that the UPS and generator are correctly paired with one another; this cannot be done without a comprehensive understanding of each product and its specifications.

References

[i] https://www.drax.com/opinion/…blackout/#chapter-8
[ii] https://www.bloomberg.com/…per-megawatt-hour-to-avoid-a-blackout
[iii] https://www.drax.com/…blackout/#chapter-3
[iv] https://www.researchgate.net/…Smart_Grid_Security_Threats_Vulnerabilities…
[v] https://www.bearingpoint.com/…/risk-of-cyber-security-attacks-on-smart-grid/

  • Spacecraft design: Redefining quality for the new frontier

    Traditional space missions use the highest-quality, best-qualified components that money can buy. However, some new space applications, like LEO satellites, are balancing quality against cost. This article, written for and published in Power & Beyond, looks at the factors affecting these design decisions.

We’re all aware that space is a brutally hostile environment to humans and machines alike. So, you might assume that all spacecraft are designed with the highest-quality, most qualified components available – but while this is often true, it isn’t always so. Technology advances are creating a proliferation of spacecraft types for widely diverse applications, and some are designed with commercial constraints very much in mind.

A simplified view of the situation is that there are two basic camps: “traditional” and “new space”. Traditional players comprise military and government organizations responsible for the International Space Station and the Chinese Tiangong space station, deep space exploration, and lunar and planetary landings. Such organizations take a conservative approach, maintaining stringent mission assurance models and rigorous recipes for device qualification, and this approach has served them well for many years. In any case, their uncompromising approach to reliability is inevitable, considering that their spacecraft often carry humans, and that a failure in space is difficult or impossible to recover from. Accordingly, they have a low tolerance for failure and pay top dollar for radiation-hardened (rad-hard), built-for-purpose semiconductors that are typically considered to be far from state of the art.

However, these large state organizations are no longer the only game in town. As reusable rockets have become available and launch costs continue to drop, traditional and new private operators of satellites are demanding lower-cost access to space – to provide space-based solutions for navigation and timing services, and for a multitude of applications in the transport, agriculture, fishing, civil engineering, and banking industries. To address these opportunities, several thousand commercial space companies have been founded around the world over the last decade.

This “new space” community of developers has a higher tolerance for risk but expects significantly lower component costs. The growing popularity of small satellites (called SmallSats, typically weighing below 300 kg and often much lighter) has further encouraged this approach, as the cost of failure is lower and constellations of small satellites can provide system-level redundancy to mask a single SmallSat unit failure. These new space developers have a more relaxed approach to mission assurance and are willing to deploy lower-cost COTS devices that use the latest technologies. The risk of catastrophic radiation-induced failure is also lower, given that the vast majority of new space satellites are used in Low Earth Orbit (LEO), where the radiation environment is less extreme. (LEO is regarded as lying between approximately 300 km and 1,000 km from Earth, below the inner Van Allen radiation belt.)

Space and its environmental challenges to electronics

Analog Devices is one company that has been supporting the aerospace and defense markets for over 40 years with high-reliability electronic devices. This section of the article is based on – and adds to – their views on how the space environment challenges electronic devices, and some of the solutions available.
The first hurdle for space electronics to overcome is the vibration imposed by the launch vehicle. The demands placed on a rocket and its payload during launch are severe: rocket launchers generate extreme noise and vibration. Pyroshock is a related issue; it’s the response of the rocket structure to high-frequency, high-magnitude stress waves that propagate through it as a result of an explosive charge, like those used in satellite ejection or the separation of two stages of a multistage rocket. Pyroshock exposure can damage circuit boards, short electrical components, or cause many other issues.

Once in space, there is zero pressure, which can significantly impact the behavior of components manufactured under atmospheric pressure. Materials may behave differently, and components designed for Earth conditions may fail in the vacuum of space. Outgassing is another major concern. Plastics, glues, and adhesives can outgas, and vapor released by plastic devices can deposit material on optical devices and degrade their performance. Outgassing of volatile silicones in low Earth orbit (LEO) causes a cloud of contaminants around the spacecraft, and contamination from outgassing, venting, leaks, and thruster firing can degrade and modify the external surfaces of the spacecraft. Manufacturing electronic components using ceramic rather than plastic eliminates this problem.

High levels of contamination on surfaces can contribute to electrostatic discharge. Satellites are vulnerable to charging and discharging; satellite charging is a variation in the electrostatic potential of a satellite with respect to its surrounding low-density plasma. The two primary mechanisms responsible for charging are plasma bombardment and photoelectric effects. Discharges as high as 20,000 V have been known to occur on satellites in geosynchronous orbits. If protective design measures are not taken, electrostatic discharge – a build-up of energy from the space environment – can damage the devices. A design solution used in geosynchronous Earth orbit (GEO) is to coat all the external surfaces of the satellite with a conducting material.

The atmosphere in low Earth orbit comprises about 96% atomic (single-atom) oxygen. Atomic oxygen can react with organic materials on spacecraft exteriors and gradually damage them. NASA has addressed this problem by developing a thin-film coating that is immune to reaction with atomic oxygen.

Another obstacle is the very high temperature fluctuation encountered by a spacecraft. In the sunlit phase of its orbit, a satellite is heated by the sun, but as it moves around to the shadow side of the Earth, the temperature can drop by as much as 300°C. Similarly, the moon’s surface temperature can reach +200°C by day and drop to -200°C at night. The wide variation of temperatures mechanically stresses components, and may shorten their lifetime and significantly limit their operational functionality. Commercial off-the-shelf components subjected to temperatures above or below the allowable range can fail.

These challenges reinforce the value of ceramic components in this environment. Ceramic packages can withstand repeated temperature fluctuations, provide a greater level of hermeticity, and remain functional at higher power levels and temperatures. They provide higher reliability in harsh environments.
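To put a number on those temperature swings, the sketch below estimates the shear strain a solder joint sees from the CTE mismatch between a package and its board, using the classic distance-from-neutral-point approximation. Every figure is an illustrative assumption, not qualification data.

```python
# Rough CTE-mismatch shear strain per thermal cycle:
# strain ≈ (alpha_board - alpha_package) * dT * DNP / joint_height
alpha_pkg_per_c = 6.5e-6    # ceramic package CTE (assumed)
alpha_pcb_per_c = 17.0e-6   # organic board CTE (assumed)
delta_t_c       = 200.0     # hot/cold swing, within the range quoted above
dnp_mm          = 10.0      # distance from neutral point to a corner joint
joint_height_mm = 0.1       # solder joint stand-off height

strain = (alpha_pcb_per_c - alpha_pkg_per_c) * delta_t_c * dnp_mm / joint_height_mm
print(f"Estimated shear strain per cycle: {strain:.2f}")   # about 0.21, i.e. 21 %
# Strain of this order, repeated every orbit, is why CTE matching and
# temperature cycle testing figure so prominently in space qualification.
```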
The vacuum of space is also a favorable environment for tin whiskers, so prohibited materials are a concern. Pure tin, zinc, and cadmium plating are prohibited on EEE (electrical, electronic, and electromechanical) parts and associated hardware in space, because these materials are subject to spontaneous whisker growth that can cause electrical shorts. Using lead-based solder eliminates the risk of shorts when devices are used in high-stress applications.

Finally, the space radiation environment can have damaging effects on spacecraft electronics. There are large variations in the levels and types of radiation a spacecraft may encounter: missions flying in low Earth orbits, highly elliptical orbits, geostationary orbits, and on interplanetary trajectories experience vastly different environments. Additionally, those environments change, as radiation sources are affected by solar activity. UV radiation causes molecular degradation of helium, oxygen, nitrogen, and other gases; the atomic versions of these elements initiate corrosion and erosion of materials. The ions and free electrons in space can cause arcing, which may affect sensitive electronic components. UV degradation changes the composition of materials and can even remove oxygen from them, which affects component performance.

The requirements for a launch vehicle are very different from those of a geostationary satellite or a Mars rover. Each space program has to be evaluated in terms of reliability, radiation tolerance, environmental stresses, the launch date, and the expected life cycle of the mission.

Power management and system-level challenges

Components in spacecraft must be designed to reliably overcome these challenges so that they can fulfil their roles in the craft’s mission-critical systems. First among these is power management, on which every other system depends. Power systems must also address the reality that solar radiation is the only available power source.

Solar panels, batteries, and power distribution units are integral components in power management. Solar panels harness readily available sunlight to charge a spacecraft’s batteries. Batteries provide backup power whenever solar energy is insufficient or unavailable. Power distribution units regulate and distribute power to all systems within the spacecraft. Space-grade components are engineered to address these challenges, ensuring the availability and efficient utilization of power during space missions. This means that stable and efficient power can be delivered to other onboard systems, including communications, control and navigation, imaging, and spectroscopy.

How components are made fit for uncompromising traditional space applications

Components that are suitable for traditional space applications – which, as described above, are the most demanding – are defined not only by their manufacturing techniques and materials, but also by how they are tested, certified, qualified, and derated.

Testing: Space-grade components undergo various tests, including:

- Burn-in: detects early failures by operating components under extreme conditions
- Non-destructive bond pull: identifies faulty wire bonds
- Temperature cycle (thermal shock): exposes components to fluctuating temperatures
- Mechanical shock: simulates sudden forces or abrupt motion
- Constant acceleration: tests the effects of acceleration
- Particle impact noise detection (PIND): detects loose particles
- Radiographic testing: identifies defects using electromagnetic waves

An evolving spectrum of cost vs quality trade-offs

An article in AspenCore Networks’ EEWeb portal, titled ‘The Convergence of Traditional and New Space Electronics Solutions’, describes the difference between traditional and new space strategies – but also shows various strands of convergence. For example, there are indications that members of the traditional camp are starting to relax their strict semiconductor product qualification requirements in order to utilize the latest technologies. This is evidenced by their willingness to use plastic packages in space missions, as opposed to traditional ceramic packages. There is also evidence that the new space camp has recognized that COTS products carry real risk, and has been judiciously deploying rad-hard components in space electronics systems to improve reliability. The result is that traditional and new space engineers are gradually converging on an approach that creates systems using more cost-effective, but still radiation-hardened, electronics.

In recent years, there have been a number of initiatives to reduce the high cost of traditional radiation-hardened components. These include innovative hardening techniques that take advantage of high-volume commercial wafer foundries, the use of commercial IP (such as the Arm processor) in rad-hard integrated circuits, and the steps currently being taken to create a plastic package specification suitable for use in space. All of these measures are driving down the cost of rad-hard components, helping the requirements of traditional and new space developers to converge.

An approach that is gaining momentum, particularly in CubeSats (a type of SmallSat), is the judicious use of radiation-hardened components to implement critical system functions and to act as a safety monitor or watchdog, checking that the system’s COTS devices are operating correctly. This hybrid approach allows a design team to implement a rad-hard functional base in conjunction with the latest state-of-the-art COTS technologies. A good example would be the use of a COTS graphics processing unit (GPU) in a small satellite that requires high-speed image processing for an onboard camera. The COTS GPU cannot mitigate radiation effects itself, but it can be managed by a radiation-hardened device that checks it is operating correctly and resets it if its operation is disturbed by radiation effects.

The future of space electronics

Just like their terrestrial counterparts, space systems are constantly driven to become smaller, more efficient, and yet more functional – and they are achieving this by embracing Industry 4.0 and its associated trends of automation, additive manufacturing (3D printing), machine learning, artificial intelligence and more. Airbus, for example, embraces additive-layer manufacturing technology to produce RF components in large volume for its Eurostar satellites. Holding promise for future electronic devices, particularly in optoelectronics, a team headed by the University of Geneva (UNIGE) created, in March 2023, a quantum material that can be used to capture and transmit information within new electronic devices at very high speed.
The presence of force fields in the material generates entirely unique dynamics, not observed in conventional materials, which allow electrons to navigate through a curved space. The advent of Industry 5.0 brings transformative trends like the metaverse and quantum computing that could significantly change the technology landscape. These technologies even have the potential to simulate the space environment (microgravity) on Earth.

References

Changing trends in designing space electronics - EDN
The Convergence of Traditional and New Space Electronics Solutions - EEWeb
The Engineer - Challenges for Electronic Circuits in Space Applications
The Stellar Role of Electronic Components in Space Exploration - Heisener Electronics
Understanding PCB Design Techniques for Space-Qualified Applications - PCB Directory
NASA Parts Selection List (NPSL) - General Requirements
Understanding Space Qualification of Electronic Components - everything RF
Spacecraft Electrical Power Systems - nasa.gov
Evolving trends in space electronics - Electronic Products

  • How green hydrogen can be used for power grid balancing

    I wrote this article for Power & Beyond magazine; you can see it on their website here.

Green hydrogen has great potential for zero-carbon energy storage in applications like power grid balancing. This article discusses the technologies involved and the barriers to overcome to ensure full commercial success.

Comprising only one electron and one proton, hydrogen is the simplest and most abundant element on Earth – and it can store or deliver a massive amount of clean energy. This makes it a potentially attractive green alternative to battery-based energy storage in applications like grid balancing, which buffers intermittent renewable energy sources to meet energy consumers’ real-time requirements. However, the gas rarely exists by itself in nature and must be produced from compounds using electrochemical extraction processes. This creates an issue: while hydrogen handles energy cleanly, its method of production is not necessarily green. So, to fulfil hydrogen’s attractive potential for ‘deep green’ energy storage, we must firstly be sure that any production methods used are truly carbon-free, and secondly understand and address the challenges associated with harnessing hydrogen on a commercial scale.

This article looks at the issues involved. We discuss what is actually meant by ‘green’ hydrogen, and how it can be used in grid balancing applications. Then we look at electrolyzers and fuel cells, which are the key components of such systems. We conclude by reviewing the challenges facing these components and their integration into grid balancing systems.

A rainbow of hydrogen colors

Green hydrogen is just one (albeit the cleanest) of a rainbow of hydrogen colors as understood within the industry. They reflect the various ways the gas is produced.

- Green hydrogen: produced using renewable energy sources like wind or solar power. As the only type produced in a climate-neutral manner, it plays a vital role in global efforts to achieve net-zero emissions by 2050.
- Blue hydrogen: derived from natural gas (methane) through a process called steam reforming, where the carbon generated is captured and stored underground using industrial carbon capture and storage (CCS).
- Grey hydrogen: the most common form of the gas, produced from natural gas (methane) without carbon capture. As a result, it has higher emissions than green or blue hydrogen.
- Brown hydrogen: produced from coal, and the least environmentally friendly option due to direct CO₂ emissions during production.
- Other hydrogen colors, including turquoise, pink, and yellow, also exist.

Green hydrogen is typically produced by electrolysis, in which electricity separates water (H2O) molecules into hydrogen and oxygen. If the electricity comes from a renewable source such as solar or wind power, the hydrogen produced can be considered green. Emissions from green hydrogen production can be as little as 43 gCO2e/kgH2 (e = equivalent) of hydrogen produced using electrolysis. In any case, they cannot exceed 3.4 kgCO2e/kgH2 to comply with the EU’s carbon intensity ceiling for a ‘green’ definition. To provide context, one kg of hydrogen provides as much energy as one gallon of gasoline, which produces 9.3 kg of CO2 during combustion.

However, costs are a barrier. Without pricing in carbon emissions, grey hydrogen is inexpensive, costing €1 to €2 per kilogram. By contrast, green hydrogen costs €3 to €8/kg, depending on the region and the availability of abundant, low-cost renewable resources.
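As a quick worked example of the carbon intensity ceiling quoted above, the Python sketch below estimates the footprint of electrolytic hydrogen from the carbon intensity of the electricity used. The 52 kWh/kg electrolyzer consumption is an assumed figure (equivalent to roughly 64% efficiency against hydrogen’s lower heating value), not a number from the article.

```python
EU_CEILING_KG = 3.4   # kgCO2e per kg H2: the EU 'green' ceiling quoted above

def h2_carbon_intensity(grid_gco2e_per_kwh, electrolyzer_kwh_per_kg=52.0):
    """kgCO2e per kg of electrolytic H2, given electricity carbon intensity.
    The 52 kWh/kg consumption figure is an assumption for illustration."""
    return grid_gco2e_per_kwh * electrolyzer_kwh_per_kg / 1000.0

for source, g_per_kwh in [("dedicated wind", 15), ("average grid mix", 250)]:
    kg = h2_carbon_intensity(g_per_kwh)
    verdict = "green" if kg <= EU_CEILING_KG else "not green"
    print(f"{source:16s}: {kg:5.1f} kgCO2e/kgH2 -> {verdict}")
# dedicated wind  :   0.8 kgCO2e/kgH2 -> green
# average grid mix:  13.0 kgCO2e/kgH2 -> not green
```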
Yet production costs will decrease over time, due to continuously falling renewable energy production costs, economies of scale, lessons from projects underway, and technological advances. As a result, green hydrogen will become more economical. And, as it becomes more attractive, it will increasingly be used in applications from hydrogen vehicles to power grid energy balancing.

Energy balancing is essential because solar and wind power, while established among the most promising renewable energy technologies, are disadvantaged by their unpredictability, and because installations’ peak output rarely coincides with times of peak demand. Hydrogen can address these issues by helping to balance fluctuations in renewable power supply and demand, supporting grid stability and enhancing the integration of intermittent renewable energy sources.

It does so by pairing electrolyzers and fuel cells. During times of excess renewable energy production, the electrolyzers act as a load on the grid; they use the renewable grid power to extract hydrogen from water for storage. When demand outstrips supply, the stored hydrogen is fed to fuel cells, which generate electricity for feeding back into the grid. Below, we take a closer look at how electrolyzers and fuel cells work, and then at some of the issues associated with integrating them into grid networks.

Electrolyzers

Electrolyzers comprise an anode and a cathode separated by an electrolyte. They function in various ways, mainly governed by the type of electrolyte material involved and the ionic species it conducts.

In a polymer electrolyte membrane (PEM) electrolyzer, the electrolyte is a solid specialty plastic material. Water reacts at the anode to form oxygen and positively charged hydrogen ions (protons). The electrons flow through an external circuit, and the hydrogen ions selectively move across the PEM to the cathode. At the cathode, hydrogen ions combine with electrons from the external circuit to form hydrogen gas.

Anode reaction: 2H2O → O2 + 4H+ + 4e-
Cathode reaction: 4H+ + 4e- → 2H2

Alkaline electrolyzers operate via transport of hydroxide ions (OH-) through the electrolyte from the cathode to the anode, with hydrogen being generated on the cathode side. Electrolyzers using a liquid alkaline solution of sodium or potassium hydroxide as the electrolyte have been commercially available for many years. Newer approaches using solid alkaline exchange membranes (AEM) as the electrolyte are showing promise at the lab scale.

Electrolysis is a leading hydrogen production route to achieving the goal of the U.S. Department of Energy’s Hydrogen Energy Earthshot, launched on June 7, 2021: reducing the cost of clean hydrogen by 80%, to $1 per 1 kilogram in 1 decade (“1 1 1”). Hydrogen produced via electrolysis can result in zero greenhouse gas emissions if the electrolysis process uses electricity from renewable sources.

Fuel cells

Fuel cells work like batteries, but they do not run down or need recharging; they produce electricity and heat as long as fuel is supplied. A fuel cell consists of two electrodes – a negative electrode (anode) and a positive electrode (cathode) – sandwiched around an electrolyte. A fuel, such as hydrogen, is fed to the anode, and air is fed to the cathode. In a polymer electrolyte membrane (PEM) fuel cell, a catalyst separates hydrogen atoms into protons and electrons, which take different paths to the cathode.
The electrons go through an external circuit, creating a flow of electricity. The protons migrate through the electrolyte to the cathode, where they reunite with oxygen and the electrons to produce water and heat. Note that in a fuel cell, the anode is negative relative to the cathode – the opposite of the polarization of an electrolytic cell. This is because the anode emits electrons into the external circuit, while the cathode receives them from the circuit.

PEM cells operate at relatively low temperatures, can quickly vary their output to meet shifting power demands, and can be used for stationary power production. Their fast response also makes them ideal for grid support services. Other fuel cell types include direct-methanol, alkaline, phosphoric acid, molten carbonate, and solid oxide, used variously for portable and stationary power applications.

As fuel cells produce heat as well as electricity, combined heat and power fuel cells are of interest for fulfilling not only electrical but also heating needs, including hot water and space heating, in houses and other buildings. Total efficiencies as high as 90% are possible. Regenerative (or reversible) fuel cells are an emerging technology of particular interest to the application we have been discussing – power grid balancing – because they can not only produce electricity from hydrogen and oxygen, but can also be reversed and powered with electricity to produce hydrogen and oxygen.

Challenges of hydrogen-based grid energy management systems

After green hydrogen is produced by electrolysis, it must be stored, possibly transported, and converted back to electricity by fuel cells. All of these stages have issues that must be addressed to achieve commercially attractive deployment of green hydrogen for grid balancing applications.

Production: Electrolysis has an efficiency of around 60-80% by calorific value. The commercialization and large-scale deployment challenges of electrolysis include a need for improved overall energy efficiency and for additional onsite compressors. The lifetime of electrolyzers, currently below five years, is also insufficient.

Storage: Hydrogen is typically stored by three methods: compression, cooling, or a hybrid of the two. Material-based hydrogen storage is also being developed, in the form of solids, liquids, or surface-based materials. Hydrogen can be stored on-site or in bulk for production plants and end-use applications; bulk storage is used in large-scale geographical hydrogen storage systems such as salt caverns, abandoned mines and similar locations. However, there are some storage challenges:

- High energy requirements for compressed hydrogen storage, due to hydrogen’s low specific gravity
- Temperature and pressure requirements when storing hydrogen in solid form
- Design aspects, legal issues, social concerns, and high cost
- Low durability of storage materials (fibre, metals, polymers etc.), and potential chemical reactions raising safety concerns
- Bulk storage at geographic features may contaminate the hydrogen, creating the need for further purification before end use

Transport and distribution: The electrolyzers used in grid balancing applications may be located remotely from their client fuel cells; this calls for transportation and distribution of the gas. Common transport and distribution methods include pipelines, high-pressure tube trailers, and liquified hydrogen tankers. Pipelines are currently the least expensive option and are already in use near large refineries and chemical plants.
Liquified hydrogen tankers transport hydrogen that has been cooled into liquid form. This increases the density of the distributed hydrogen, making it more efficient for transportation than high-pressure tube trailers. However, if the delivery and consumption rates are not matched, the liquified hydrogen will evaporate, causing significant losses and ineffective utilization.

The challenges in the transport and distribution of hydrogen are as follows:

- The existing hydrogen pipeline infrastructure is not sufficient to meet future demand.
- Existing natural gas pipelines cannot be used directly for hydrogen, due to embrittlement.
- Mixing hydrogen with natural gas, although an option, significantly affects pipeline life, even at 5% concentration by volume.
- Fluctuations in temperature during fast transfers of compressed hydrogen have to be controlled optimally to minimize losses and prevent thermal instability.
- Alternative ways of transporting hydrogen, for example using liquid organic materials as hydrogen carriers, are being researched to enable low-cost, high-energy-density transfer of hydrogen.

Fuel cells: As the last link in the hydrogen-based grid balancing chain, fuel cells also present challenges for large-scale commercialization. Their efficiency, degradation, durability, resiliency, size, power, and current densities all need improvement. Fuel cell systems are also highly complex, especially in relation to thermal and water management, purification, and humidification. These issues are exacerbated by insufficient monitoring of the system’s performance and state of health. Additionally, hydrogen fuel cells have a short lifespan and need frequent replacement.

Conclusion

The above shows the challenges of using hydrogen for grid balancing, and of ensuring that it’s green. However, there is huge motivation to see hydrogen fulfil its potential, and new developments are continuously being announced. For example, Australian company Hysata claims to have developed a capillary-fed electrolyzer that achieves 95% efficiency, while researchers at Tohoku University have developed a novel catalyst model which sets new standards in fuel cell technology. There are larger drivers on the world stage as well, including the World Economic Forum, which has launched its Clean Hydrogen Project Accelerator initiative, as part of its Transitioning Industrial Clusters platform, to find ways to accelerate adoption.
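To close, here is a back-of-envelope Python sketch chaining the stages discussed above into a round-trip efficiency for hydrogen grid balancing. The electrolysis figure sits within the 60-80% range quoted in the article; the storage and fuel cell figures are assumptions for illustration only.

```python
# Back-of-envelope round-trip efficiency for hydrogen grid balancing
stages = {
    "electrolysis":           0.70,  # article: ~60-80 % by calorific value
    "compression & storage":  0.90,  # assumed losses getting H2 into storage
    "fuel cell (electrical)": 0.50,  # assumed PEM electrical efficiency
}

round_trip = 1.0
for stage, eff in stages.items():
    round_trip *= eff
    print(f"{stage:23s} {eff:.0%}   cumulative {round_trip:.0%}")
# Roughly a third of the original renewable electricity returns to the grid,
# which is why the efficiency improvements discussed above matter so much.
```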

  • Reshoring semiconductor packaging capability - a great example of problem-solving technology and an excellent opportunity for content marketing

    Nigel Charig

A company that uses innovation to overcome a significant engineering design or manufacturing challenge provides a great opportunity for creating effective marketing content. Anyone faced with a similar problem will likely relate to the scenario and, if the time is right, consider how they could also benefit from the solution. Custom Interconnect Limited’s development of their in-house semiconductor packaging capability is an excellent example of such innovation.

Early last year, I was fortunate to be commissioned to write about a great example of a problem-solving strategy – one which I’m sure most electronics designers and manufacturers could relate to. Titled ‘Bringing semiconductor packaging back to the UK’, it describes how Andover-based electronics manufacturer Custom Interconnect Limited (CIL) has bucked the habit of relying on offshore semiconductor manufacturers by developing its own 15,000 sq ft in-house ISO 7 semiconductor packaging clean room. This runs alongside its long-established PCBA manufacturing facilities.

The full article was published in Electronics Sourcing’s April 24, 2024 edition. It’s also available on CIL’s own website.

If you’re interested in generating content to highlight your products’ problem-solving capabilities, drop me an email at nigel@charig-associates.co.uk or call me on +44 (0)7968 720316. I’ll be happy to discuss content ideas to show your customers how they can solve their design or manufacturing problems with your products.

