Pyrometallurgical modelling

‘It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used.’

– Gottfried Wilhelm Leibniz (1646-1716)

‘The computer is incredibly fast, accurate, and stupid. Man is unbelievably slow, inaccurate, and brilliant. The marriage of the two is a force beyond calculation.’

– Leo Cherne (1912-1999)

Pyrometallurgical operations are essentially concerned with the high-temperature processing of materials. These chemical processes can be extremely complex, involving reactions between gas, solids, liquid slag, and liquid metal (and sometimes other phases as well). In addition to the chemistry, consideration must also be given to the energy supply, containment, and flow of the various streams. Pyrometallurgical processes must be evaluated at many stages during their development and design, and when operating changes are introduced. Because experimental work in pyrometallurgy is expensive, a system should be characterized as thoroughly as possible before experimental work is undertaken. Computer simulation allows the requirements of a particular process to be determined quickly and reliably.
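
To make 'determining the requirements of a process' concrete, the Python sketch below shows the simplest kind of question such a simulation answers: the power needed to heat and melt a feed stream at steady state. The property values are deliberately rough assumptions for illustration, not data for any real process.

    # Minimal sketch of a steady-state energy-balance question.
    # All property values are illustrative assumptions, not real data.

    CP_SOLID = 0.7     # kJ/(kg.K), assumed mean heat capacity of the feed
    LATENT = 300.0     # kJ/kg, assumed heat of fusion
    T_FEED, T_TAP = 25.0, 1500.0    # feed and tapping temperatures, deg C

    def power_required(feed_rate):
        """Power (kW) to heat feed_rate (kg/s) to tap temperature and melt it."""
        specific_energy = CP_SOLID * (T_TAP - T_FEED) + LATENT   # kJ/kg
        return feed_rate * specific_energy                       # kJ/s = kW

    print(power_required(10.0))    # 10 kg/s of feed: about 13.3 MW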

Pyrometallurgy is at least 6000 years old, with the oldest known smelters being used for the production of copper in the Middle East. Mathematical modelling, at least in its simplest form, is much older than this. The oldest known mathematical artefact is considered to be the Lebombo bone – a 35 000-year-old baboon fibula discovered in a cave in the Lebombo Mountains in Swaziland in the 1970s – that bears a series of 29 notches deliberately cut to aid counting and perhaps also to measure the passage of time. The abacus dates from as early as 2400 BC in Babylon, and was also found in China, Egypt, Greece, and Rome, and used by the Aztecs.

The earliest recognisable computers were the Mark I (1944) and Mark II (1945) developed at Harvard University. These electromechanical computers are seen as the first universal calculators (even though they lacked branching) and operated at three calculations per second. The first electronic general-purpose computer, ENIAC (Electronic Numerical Integrator And Computer), was built in 1946 at the University of Pennsylvania. ENIAC could add or subtract two ten-digit numbers at a rate of 5000 operations per second, or perform 357 multiplications per second. ENIAC weighed 27 tons, measured 2.4 m × 0.9 m × 30 m, and consumed 150 kW of power. A rather daring prediction was made in the March 1949 issue of Popular Mechanics: ‘Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and perhaps weigh 1½ tons.’

In my own lifetime, I progressed from using log tables, then a slide rule, to an early scientific calculator in high school. At university, I started programming in Fortran on a mainframe computer that used punched cards (until the late 1970s). I then used a programmable calculator with less than one kilobyte of memory to design heat exchangers (an oddly named piece of equipment: even if there were a caloric-like substance called ‘heat’, it certainly isn't exchanged) and cyclones. When I started work, early chemical equilibrium calculations were carried out (slowly and expensively) via a Saponet satellite link to the F*A*C*T thermodynamic database (the predecessor of FactSage) hosted on a mainframe computer at McGill University in Montreal. At that time, calculations were limited in the number of elements that could be accommodated, so the time was ripe for a desktop computer program that could be applied to more complex systems. Pyrosim computer software for the steady-state sequential-modular simulation of pyrometallurgical processes was initially developed at Mintek in 1985 on a 64 kB Apple II computer (1 MHz), and was presented at the APCOM 87 conference in 1987. Pyrosim moved to an MS-DOS version in 1988, when it was first used in industry. This software was originally developed to simulate various process routes for the production of raw stainless steel, but the structure was kept general enough to allow it to be used to calculate predictive steady-state mass and energy balances for a very wide range of processes. Pyrosim thermodynamic modelling software was eventually installed at 95 sites in 22 countries on 6 continents.
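
Pyrosim itself is proprietary, but the sequential-modular idea is easy to illustrate. In the Python sketch below (entirely hypothetical unit models and split fractions, not Pyrosim's), each unit is a function from input streams to output streams, and the recycle ('tear') stream is converged by successive substitution – the basic structure of a steady-state flowsheet balance.

    # Hypothetical two-unit flowsheet with one recycle stream, solved by
    # successive substitution - a minimal sequential-modular simulation.

    def smelter(feed, recycle):
        """Toy unit model: split the combined feed into metal and slag."""
        total = feed + recycle
        return {"metal": 0.6 * total, "slag": 0.4 * total}

    def slag_cleaner(slag):
        """Toy unit model: recover a fraction of the slag for recycling."""
        return {"recycle": 0.2 * slag, "discard": 0.8 * slag}

    def simulate(feed=100.0, tol=1e-9, max_iter=100):
        recycle = 0.0                       # initial guess for the tear stream
        for _ in range(max_iter):
            smelted = smelter(feed, recycle)
            cleaned = slag_cleaner(smelted["slag"])
            if abs(cleaned["recycle"] - recycle) < tol:
                break
            recycle = cleaned["recycle"]    # successive substitution
        return smelted, cleaned

    print(simulate())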

Desktop computing improved rapidly, with a roughly 1500-fold increase in speed and storage capacity from the Apple II to a fast Pentium processor. A typical Pyrosim simulation would have taken over an hour on an Apple II, but just over three seconds on a 33 MHz i486 computer (1200 times faster). A study of practical desktop supercomputing was carried out in 1993, using a real-world benchmark, written in ‘C’, to solve a finite-volume energy-transfer problem in two-dimensional cylindrical geometry, using the TriDiagonal Matrix Algorithm. The chosen reference computer was a 50 MHz i486 computer (nominally rated at 30 MIPS, or 30 million instructions per second). At that time, the Pentium processor (rated at 100 MIPS) was not yet commercially available. Using this practical test, it was found that the i486-50 was 80 times faster than the original IBM PC (4.77 MHz with 8087 coprocessor), and the well-established Cray 2 supercomputer was in turn 5.25 times as fast as the i486-50. This indicated that the Cray was only about twice as fast as a Pentium for this type of problem. The Cray 2 was rated at 1600 MFlop/s (million floating-point operations per second). Since that time, the computing speed of the world's fastest supercomputer has increased exponentially, from about 124 GFlop/s (124 × 10⁹ Flop/s) in 1994 to about 34 PFlop/s (34 × 10¹⁵ Flop/s) in 2014. A typical personal computer in 2014 was able to run at about 500 GFlop/s – much faster than the world's fastest supercomputer from 1994.
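
The TriDiagonal Matrix Algorithm (often called the Thomas algorithm) used in that benchmark is short enough to show in full. The version below is a generic Python sketch, not the original ‘C’ benchmark code: it solves a tridiagonal system in O(n) operations, which is why it is the workhorse line-solver in finite-volume heat-transfer codes.

    # Thomas algorithm: solve a tridiagonal system with sub-diagonal a,
    # diagonal b, super-diagonal c, and right-hand side d, in O(n) time.
    # By convention a[0] and c[-1] are unused.

    def tdma(a, b, c, d):
        n = len(d)
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                     # forward elimination
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):            # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # A 1-D diffusion stencil, -x[i-1] + 2*x[i] - x[i+1] = 1, as a test:
    print(tdma([0, -1, -1, -1], [2, 2, 2, 2], [-1, -1, -1, 0], [1, 1, 1, 1]))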

If we assume that computer power will continue to increase in the way it has done for the past twenty years, it is interesting to consider what might become possible in years to come. For example, a fully resolved model of an industrial-scale electric arc (say 50 kA over about 0.5 m) is expected to require over 200 PFlop/s – about half a million times the computing power of today's personal computers. At current growth predictions, this should be achieved on the world's fastest computer in 2017, on the 500th fastest computer in 2024, and on the desktop in 2037.
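
The arithmetic behind these dates is easy to check. The Python sketch below fits an exponential growth rate to the two supercomputer figures quoted above and extrapolates; it reproduces the 2017 estimate, and gives a desktop date in the mid-2030s (the 2037 figure presumably assumes a slightly slower growth rate for personal computers).

    # Fit an exponential growth rate to the figures quoted above
    # (124 GFlop/s in 1994, 34 PFlop/s in 2014) and extrapolate to the
    # 200 PFlop/s needed for a fully resolved arc model.

    import math

    rate = math.log(34e15 / 124e9) / (2014 - 1994)   # ~0.63/yr, i.e. ~1.9x per year

    def year_reached(flops_in_2014, target=200e15):
        return 2014 + math.log(target / flops_in_2014) / rate

    print(round(year_reached(34e15)))    # world's fastest machine: 2017
    print(round(year_reached(500e9)))    # a 500 GFlop/s desktop: about 2035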

This edition of the Journal contains a selection of papers from the SAIMM Pyrometallurgical Modelling conference. These papers should provide a good sample of the current activities in this very dynamic field. One of the challenges encountered in modelling pyrometallurgical processes is the chemical complexity of some of the feed materials and products. How best to represent coal has challenged modellers for years, but a systematic approach to this has now been adopted. Thermodynamic modelling has been applied to chemical reaction systems using diverse feed materials and reductants to produce a wide variety of metals, from platinum to clean steel. Techno-economic models bring in the economic aspects too. The extreme conditions in pyrometallurgical reactors have also proved challenging, involving not only very high temperatures but also jets of air at sonic velocity, as used in the converting of PGM matte; here, model results have been compared with industrial trials. Fluid flow analysis has become more widespread, and has been used to model the flow in tundishes and converters. A multi-physics approach, involving the interaction between concentration fields and a magneto-hydrodynamic description of an electric arc, has been used to study the effect of dust particles in a furnace. Modelling of gas-solid reacting systems has been used to study the sintering of iron ore, as well as rotary kilns for direct reduction.

Pyrometallurgy superficially appears primitive and little changed from hundreds of years ago, but it is one of the most challenging areas to understand and model. The simultaneous effects of very high temperatures, energy transfer, fluid dynamics, electromagnetics, phase changes, multiphase flow, free surface flow, particulate materials, and thermochemistry will provide much to interest pyrometallurgical modellers of the future. The dramatic increases in computing power make it possible to pursue modelling approaches that earlier generations could only have dreamed about.

Rodney Jones