December 2005

Safety tips and techniques for FEA in modeling solids



Using commercially available finite element analysis (FEA) software is easier today than it has ever been. Software vendors have made great progress toward providing programs that are as easy to use as they are powerful. But, in doing so, software suppliers have created a perilous quagmire for unwary users. Some engineers and managers look upon commercially available FEA programs as automated tools for design. In fact, nothing could be further from reality than that simplistic view of today's powerful programs. The engineer who plunges ahead, thinking that a few clicks of the left mouse button will solve all his problems, is certain to encounter some very nasty surprises.

 

With the exception of a very few trivial cases, all finite element solutions are wrong, and they are likely to be more wrong than you think. Surprised? You should not be. The finite element method is a numerical technique that provides approximate solutions to the equations of calculus. All such approximate solutions are wrong, to some extent. Therefore, the burden is on us to estimate just how wrong our finite element solutions really are, by using a convergence study (also known as a mesh refinement study).

 

Performing a convergence study is quite simple. But before we plunge into the details of convergence studies, we need to define an important term: degrees of freedom (DOF). The DOF of a finite element model are the unknowns, the calculated quantities. For static structural problems in three dimensions, each node has three DOF, the three components of displacement. For heat transfer problems, each node has but one degree of freedom, i.e., temperature. Increasing the number of elements, all other factors being equal, results in a model with more nodes and, therefore, more DOF.
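As an aside, this DOF bookkeeping amounts to a simple multiplication. The following is a minimal sketch in Python, assuming a uniform number of DOF per node; the node count is hypothetical.

def count_dof(num_nodes, dof_per_node):
    # Total degrees of freedom for a mesh with a uniform DOF count per node.
    return num_nodes * dof_per_node

nodes = 4000                              # hypothetical mesh size
print(count_dof(nodes, dof_per_node=3))   # 3D static structural (ux, uy, uz): 12000
print(count_dof(nodes, dof_per_node=1))   # heat transfer (temperature only): 4000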

 

Doing a convergence study means that we treat the number of DOF in our finite element model as an additional variable. We see how our function of interest is affected by this additional variable. The easiest way to do this, of course, is to plot the function versus the number of DOF in the model. This is illustrated in figure 1.

 

Figure 1: Illustration of a convergence study. Before the knee of the curve, a small change in the number of DOF yields a large change in the function. After the knee, even a large change in the number of DOF yields only a small change in the function.

 

 

The vertical axis in figure 1 represents the function of interest. This could be temperature, stress, voltage, or any measure of the performance of our products. The horizontal axis in figure 1 shows the number of DOF used for each of the trial solutions. Here, we show only four trial solutions. Three is the absolute minimum; occasionally, five or more are needed.

 

Notice that the trial solutions are not evenly spaced. Instead, the number of DOF has been doubled with successive trial solutions. This is necessary to avoid generating trial solutions that are too close to each other. We should not expect to see much difference between a trial solution obtained with 2000 DOF and one obtained with 2500 DOF. If we were to use trial solutions so closely spaced, we would probably fool ourselves into thinking that we had obtained a nearly converged solution. We can avoid this problem by spacing the trial solutions widely. Burnett recommends at least doubling the number of DOF with successive trial solutions [1].
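A convergence study of this kind can be scripted around whatever solver is being used. The Python sketch below assumes a hypothetical solve_model(target_dof) wrapper that remeshes, solves, and returns the actual DOF count together with the function of interest; the starting DOF and the number of trials are likewise assumptions, chosen to follow the doubling rule described above.

def run_convergence_study(solve_model, start_dof=1000, n_trials=4):
    # Run trial solutions, at least doubling the DOF each time [1].
    results = []
    target_dof = start_dof
    for _ in range(n_trials):
        actual_dof, value = solve_model(target_dof)
        results.append((actual_dof, value))
        target_dof *= 2                   # widely spaced trials: double the DOF
    return results

# Plotting the results, as in figure 1 (requires matplotlib):
#   import matplotlib.pyplot as plt
#   dofs, values = zip(*run_convergence_study(solve_model))
#   plt.plot(dofs, values, marker="o")
#   plt.xlabel("degrees of freedom")
#   plt.ylabel("function of interest")
#   plt.show()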

 

But why do a convergence study at all? Why not simply populate the model with, say, 10,000 elements or more, thus ensuring an accurate solution? Suppose we do have a solution with a 0.01% error in it, but we don't know that that is the error. Can we have any confidence in our solution? A solution with even a 0.01% error is worthless to us if we do not know that that is the error. Only after a convergence study can we have any confidence in our results. But this, of course, is not the only reason for performing a convergence study. There is a second, more compelling reason.

 

We need to estimate the error in our earlier trial solutions, the ones that were obtained with fewer DOF. A single finite element solution can provide but one piece of data, one value for our function of interest. The information that we need is the answer to the question, "How do we design this product?" To deduce that answer, we need the data from tens of finite element solutions, not from just a single solution.

 

Our most accurate (converged) solution allows us to estimate the error in the previous trial solutions. Since it is close to the exact solution, the differences between it and the earlier solutions are really estimates of the errors in the earlier solutions. With those estimates, we can select the model that gives us the least accurate solution that is still acceptable. If we can live with a 10% error in our finite element solution, then we want to use the model that gave us that 10% error, because that model will give us the next thirty or forty solutions economically, giving us the answer to the question "How do we design this product?" This is the greatest benefit provided by finite element analysis.
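As a numerical illustration of this error estimate, the short Python sketch below treats the most refined trial as a stand-in for the exact solution and reports the estimated error of each coarser trial; the (DOF, value) pairs are hypothetical.

# Hypothetical trial solutions: (DOF, function of interest)
trials = [(1000, 118.0), (2000, 104.0), (4000, 99.0), (8000, 97.5)]

reference = trials[-1][1]     # most refined (nearly converged) trial stands in for "exact"
for dof, value in trials[:-1]:
    error_pct = 100.0 * abs(value - reference) / abs(reference)
    print(f"{dof:>6} DOF: estimated error {error_pct:.1f}%")

# Choose the coarsest model whose estimated error is still acceptable (say, 10%)
# and use it for the remaining design-iteration runs.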

 

In the last section we talked about accuracy relative to the exact mathematical solution to the defined problem. But there is another kind of accuracy with which we must be concerned: accuracy relative to reality. By the estimate of one experienced analyst (no, not me), 80% of all finite element solutions are gravely wrong, because the engineers doing the analyses make serious modeling mistakes. Our failure to faithfully model the physics of a problem can be devastating, and no convergence study can protect us from the outcome. If we define the problem incorrectly, then we will never see the correct solution. Even worse, we are likely to generate and believe in a converged solution to that erroneously defined problem.

 

Some years ago, while attending a conference, I watched as one presenter discussed how a finite element model of an entire circuit board had been used to supposedly improve the thermal performance of the board. I was quite impressed with the work until I asked the presenter how the thermal interface resistance between the components and the board had been modeled. As it turned out, that resistance had not been taken into account.

 

The thermal resistance between two surfaces in contact is attributable to the surface roughness of engineering materials. Try as we might to create perfectly smooth surfaces, we can never succeed. This is particularly true in a production environment. Since thermal conduction is the result of interactions between molecular neighbors, the conduction path between two solids is interrupted at the contact surfaces. There, the two solids touch only where the microscopic high spots on their respective surfaces touch, as illustrated in figure 2.

 

Therefore, the cross-sectional area of the solid-to-solid conduction path is greatly diminished. Some thermal energy is indeed transferred from one solid directly to the other solid, through the many contact points (actually minute contact areas). But much of the energy transfer is through the medium that occupies the interstitial voids, the valleys between the many peaks. That medium is usually air, an excellent thermal insulator.

 

 

Figure 2: Due to the roughness of engineering materials, the conduction path between two solid surfaces is interrupted. Only a fraction of the transferred energy is conducted directly from one solid to the other. Much of the heat transfer that takes place does so through the medium that occupies the voids between the two surfaces.

 

 

This insulating mechanism is the reason for the ubiquitous heat sink compound that is so popular among electrical engineers. The compound, a thermally conductive gel, fills in the valleys. It displaces the thermally insulating air and provides a better conduction path between the two surfaces.

 

The thermal resistance between surfaces in contact is also a function of the normal stress transmitted across the interface. When components are attached with screws to, say, a heat sink, the clamping force flattens many of the high spots on the contact surfaces and creates many additional contact areas, albeit microscopic ones. This decreases the thermal interface resistance.

 

The absence of a medium must also be considered. At altitudes in excess of 24 km, there remains very little air between the contact surfaces. The result is a drastic increase in the thermal interface resistance, as Steinberg reports [2].

 

One simple and easy way to account for the thermal interface resistance between two solids when defining a finite element model is to include in the model an additional, fictitious layer of solid finite elements. We can then specify the thermal conductivity of those elements in the three orthogonal directions, so that we duplicate the physics that exist at the contact surfaces. In the direction perpendicular to the plane of contact, we specify a value of thermal conductivity that will provide, on a macroscopic level, the thermal interface resistance that empirical data tell us exists.

 

In the remaining two orthogonal directions, we specify a thermal conductivity that is arbitrarily low. This forces our finite element model to permit heat flow only through the plane of the fictitious elements, which are defined in a plane parallel to the plane of contact. Heat flow within the plane of the fictitious elements is prevented by the arbitrarily low thermal conductivity that we specify for the in-plane directions.
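As a back-of-the-envelope illustration of this technique, the Python sketch below computes the through-thickness conductivity that makes a fictitious layer of chosen thickness t reproduce a given specific contact resistance: the area-specific resistance of the layer is t/k, so setting t/k equal to the empirical contact resistance gives k = t divided by that resistance. The thickness and resistance values are hypothetical, as is the arbitrarily low in-plane value.

t_layer = 0.1e-3      # chosen thickness of the fictitious interface layer, m (hypothetical)
r_contact = 2.5e-4    # empirical specific contact resistance, K*m^2/W (hypothetical)

k_perpendicular = t_layer / r_contact   # t / k = R_contact  ->  k = t / R_contact
k_in_plane = 1.0e-6                     # arbitrarily low, to suppress in-plane heat flow

print(f"through-thickness conductivity: {k_perpendicular:.2f} W/(m*K)")   # 0.40 W/(m*K)
print(f"in-plane conductivity:          {k_in_plane} W/(m*K)")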

 

Is this an exact representation of the physics? Of course not. But it is a close approximation, at least on a macroscopic level. However, unless we have made some silly arithmetic mistake in calculating our contrived thermal conductivity, any error introduced by this modeling technique is likely to be as small as the discretization error.

 

Finite element analysis is a very powerful tool with which to design products of superior quality. Like all tools, it can be used properly, or it can be misused. The keys to using this tool successfully are to understand the nature of the calculations that the computer is doing and to pay attention to the physics.

 

Tony Rizzo
Lucent Technologies (Bell Labs)
Room 6E-124
67 Whippany Road
Whippany, NJ, USA

Suggested reading

[1] David S. Burnett, Finite Element Analysis: From Concepts to Applications, Addison-Wesley, 1987.

 

[2] Dave S. Steinberg, Cooling Techniques for Electronic Equipment, John Wiley & Sons, 1980.
