Mechanical & Energy Engineering Department Theses and Dissertations

Permanent URI for this collection

Information about the Purdue School of Engineering and Technology Graduate Degree Programs available at IUPUI can be found at: http://www.engr.iupui.edu/academics.shtml

Recent Submissions

Now showing 1 - 10 of 174
  • Item
    Fabrication and Characterization of Lithium-ion Battery Electrode Filaments Used for Fused Deposition Modeling 3D Printing
    (2022-08) Kindomba, Eli; Zhang, Jing; Zhu, Likun; Schubert, Peter
    Lithium-ion batteries (Li-ion batteries or LIBs) have been extensively used in a wide variety of industrial applications and consumer electronics. Additive manufacturing (AM) or 3D printing (3DP) techniques have evolved to allow the fabrication of complex structures of various compositions in a wide range of applications. The objective of this thesis is to investigate the application of 3DP to fabricate a LIB, using a modified process from the literature [1]. The ultimate goal is to improve the electrochemical performance of LIBs while maintaining design flexibility with a 3D printed architecture. In this research, both the cathode and anode, in the form of specifically formulated slurries, were extruded into filaments using a high-temperature pellet-based extruder. Specifically, filament composites made of graphite and polylactic acid (PLA) were fabricated and tested to produce anodes. Two other types of PLA-based filament composites, made respectively of lithium manganese oxide (LMO) and lithium nickel manganese cobalt oxide (NMC), were also investigated to produce cathodes. Several filaments with various material ratios were formulated in order to optimize printability and battery capacity. Flat battery electrode disks similar to conventional electrodes were then fabricated using the fused deposition modeling (FDM) process, assembled in half-cells and full cells, and characterized electrochemically. Additionally, in parallel to the experiments, a 1-D finite element (FE) model was developed to understand the electrochemical performance of the graphite anode half-cells. Moreover, a simplified machine learning (ML) model based on Gaussian process regression was used to predict the voltage of a given half-cell from input parameters such as charge and discharge capacity. The results of this research showed that 3D printing technology is capable of fabricating LIBs.
For the 3D printed LIB, cell electrochemical performance improved with increasing content of active materials (i.e., graphite, LMO, and NMC) within the PLA matrix, along with the incorporation of a plasticizer. The FE model of the graphite anode showed a discharge-curve trend similar to the experiment. Finally, the ML model demonstrated reasonably good prediction of charge and discharge voltages.
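As a rough illustration of the Gaussian process regression step described above, the sketch below fits a GPR to synthetic capacity-voltage data (toy values, not the thesis measurements; scikit-learn is assumed available) and predicts the voltage at an unseen capacity:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic capacity/voltage pairs standing in for half-cell measurements.
rng = np.random.default_rng(0)
capacity = rng.uniform(0.0, 1.0, size=(40, 1))                   # normalized discharge capacity
voltage = 3.7 - 0.8 * capacity[:, 0] + rng.normal(0, 0.01, 40)   # toy discharge curve + noise

# GPR with an RBF kernel plus a white-noise term to absorb measurement scatter.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(capacity, voltage)

# Predict voltage (and its uncertainty) at an unseen capacity value.
v_pred, v_std = gpr.predict(np.array([[0.5]]), return_std=True)
```

The predictive standard deviation is what makes GPR attractive here: it flags capacity regions where the model has seen little training data.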
  • Item
    Calibration and Validation of a High-Fidelity Discrete Element Method (DEM) based Soil Model using Physical Terramechanical Experiments
    (2022-08) Ghike, Omkar Ravindra; El-Mounayri, Hazim; Tovar, Andres; Zhang, Jing
    A procedure for calibrating a discrete element (DE) computational soil model for various moisture contents using a conventional asperity-spring friction modeling technique is presented in this thesis. The procedure is based on the outcomes of two physical soil experiments: (1) compression and (2) unconfined shear strength at various levels of normal stress and normal pre-stress. The compression test is used to calibrate the DE soil plastic strain and elastic strain as a function of compressive stress. The unconfined shear test is used to calibrate the DE inter-particle friction coefficient and adhesion stress as a function of soil plastic strain. This thesis describes the experimental test devices and test procedures used to perform the physical terramechanical experiments. The calibration procedure for the DE soil model is demonstrated using two types of soil, sand-silt (2NS sand) and silt-clay (fine grain soil), over five different moisture contents: 0%, 4%, 8%, 12%, and 16%. The responses of the DE-based models are then validated by comparing them to experimental pressure-sinkage results for circular disks and cones for the same two soils and moisture contents. The mean absolute percentage error (MAPE) during the compression calibration was 26.9%, whereas the MAPE during the unconfined shear calibration was 11.38%; the overall MAPE for the entire calibration phase was 19.34%.
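The MAPE figures quoted above follow the standard definition; a minimal sketch with hypothetical values (not the thesis data):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Toy pressure-sinkage comparison (illustrative numbers only).
measured  = [10.0, 20.0, 30.0]
simulated = [11.0, 18.0, 33.0]
error = mape(measured, simulated)   # each term is off by 10% -> MAPE = 10.0
```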
  • Item
    Deep Learning Based Crop Row Detection
    (2022-05) Doha, Rashed Mohammad; Anwar, Sohel; Al Hasan, Mohammad; Li, Lingxi
    Detecting crop rows from video frames in real time is a fundamental challenge in precision agriculture. The deep learning based semantic segmentation method U-Net, although successful in many tasks related to precision agriculture, performs poorly on this task. The reasons include the paucity of large-scale labeled datasets in this domain, the diversity of crops, and the diversity of appearance of the same crop at various stages of its growth. In this work, we discuss the development of a practical real-life crop row detection system in collaboration with an agricultural sprayer company. Our proposed method takes the output of semantic segmentation using U-Net and then applies a clustering based probabilistic temporal calibration that can adapt to different fields and crops without the need to retrain the network. Experimental results validate that our method can be used both for refining the U-Net results to reduce errors and for frame interpolation of the input video stream. Upon the availability of more labeled data, we switched our approach from a semi-supervised model to a fully supervised end-to-end crop row detection model using a Feature Pyramid Network (FPN). Central to the FPN is a pyramid pooling module that extracts features from the input image at multiple resolutions, enabling the network to use both local and global features in classifying pixels as crop rows. After training the FPN on the labeled dataset, our method obtained a mean IoU (Jaccard index) score of over 70% on the test set. We trained our method on only a subset of the corn dataset and tested its performance on multiple variations of weed pressure and crop growth stage to verify that the performance translates across these variations and is consistent over the entire dataset.
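The IoU (Jaccard index) metric reported above is the ratio of mask intersection to mask union; a minimal sketch with toy binary masks (illustrative, not the thesis data):

```python
import numpy as np

def iou(pred, truth):
    """Jaccard index for binary crop-row masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Toy 1-D masks standing in for segmentation output vs. ground-truth labels.
pred  = np.array([1, 1, 0, 0, 1])
truth = np.array([1, 0, 0, 1, 1])
score = iou(pred, truth)   # intersection 2, union 4 -> 0.5
```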
  • Item
    Analyzing Compressed Air Demand Trends to Develop a Method to Calculate Leaks in a Compressed Air Line Using Time Series Pressure Measurements
    (2022-05) Daniel, Ebin John; Razban, Ali; Goodman, David; Chen, Jie
    Compressed air is a powerful source of stored energy and is used in applications ranging from painting to pressing, making it a versatile tool for manufacturers. Due to the high cost and energy consumption associated with producing compressed air and its use within industrial manufacturing, it is often referred to as the fourth utility, after electricity, natural gas, and water. This is why air compressors and associated equipment are often the focus of improvement efforts by manufacturing plant managers. As compressed air can be used in multiple ways, the methods used to extract and transfer its energy vary as well. Compressed air can flow through different types of piping, such as aluminum, polyvinyl chloride (PVC), or rubber, with varying hydraulic diameters, and through different fittings such as 90-degree elbows, T-junctions, and valves. One of the major concerns in managing the energy consumption of an air compressor is the waste of air through leaks. Air leaks make up a considerable portion of the energy wasted in a compressed air system, as they cause a multitude of problems that the compressor must compensate for to maintain steady operation of the pneumatic devices on the manufacturing floor that rely on compressed air. When leaks form within the compressed air piping network, they act as continuous consumers: they not only siphon off compressed air but also reduce the pressure needed within the pipes. The air compressors then work harder to compensate for the losses in pressure and air, causing overconsumption of energy and power. Overworking the air compressor also stretches the internal equipment beyond its capabilities, especially if it is already running at full load, reducing its lifespan considerably.
In addition, if there are multiple leaks close to the pneumatic devices on the manufacturing floor, the immediate loss in pressure and air can cause the devices to operate inefficiently and thus reduce production. Cumulatively, this impacts the manufacturer considerably in both energy consumption and profits. Multiple methods of air leak detection and accounting currently exist to quantify the impact of leaks on compressed air systems. These methods are usually conducted while the air compressors are running but during times of no, or minimal, active air consumption by the pneumatic devices on the manufacturing floor. Such periods are called non-production hours and generally occur during breaks or between employee shift changes. This time is chosen so that the only air consumption within the piping is that of the leaks; thus, the majority of the energy and power consumed during this period can be attributed to the leaks. The collected data are then extrapolated to estimate the energy and power consumed by the leaks over the rest of the year. A few problems arise, however, when using such a method to estimate the effects of leaks throughout the year. One issue is the assumption that the air and pressure lost through the identified leaks remain constant even during production hours, i.e., the hours with active air consumption by the pneumatic devices on the floor. This may not be the case: the increased air flow rates and varying line pressure can increase the amount of air lost through the same orifices that were initially detected.
Another challenge with using only data collected during a single non-production period is that additional air leaks may develop later, and the energy and power lost through these newer leaks would remain unaccounted for. Because the initial estimates do not include these additional losses, plant managers may underestimate the effects of the air leaks. To combat these issues, a continuous method of air leak analysis is required to monitor the air compressors' efficiency with respect to leaks in real time. Using a model that includes both production and non-production hours when accounting for the leaks, a 50.33% increase in estimated energy losses and an 82.90% increase in estimated demand losses were observed when the effects of the air leaks were monitored continuously and in real time. A real-time monitoring system can provide an in-depth understanding of the compressed air system and its efficiency. Managing leaks within a compressed air system is challenging, especially when the energy wasted through those leaks is unaccounted for. The main goal of this research was to find a nonintrusive way to calculate the amount of air and energy lost through leaks using time series pressure measurements. Previous studies have shown a strong relationship between pressure difference and air use within pneumatic lines; this correlation, along with other factors, is exploited in this research to develop a novel and viable method of leak accounting, the Continuous Air Leak Monitoring (CALM) system.
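The snapshot extrapolation critiqued above, and the correction from continuous accounting, can be sketched with hypothetical figures; only the 50.33% energy-loss increase comes from the study, while the leak power and hours are illustrative assumptions:

```python
# Hypothetical figures; only the 50.33% correction factor comes from the study above.
leak_power_kw = 15.0     # assumed power feeding leaks, measured during non-production hours
annual_hours = 8760      # extrapolated as if leak losses were constant all year

snapshot_estimate_kwh = leak_power_kw * annual_hours        # non-production-only method
continuous_estimate_kwh = snapshot_estimate_kwh * 1.5033    # continuous (CALM-style) accounting
underestimate_kwh = continuous_estimate_kwh - snapshot_estimate_kwh
```

The gap `underestimate_kwh` is the energy loss a plant manager would never see using a single non-production snapshot.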
  • Item
    RADAR Modeling For Autonomous Vehicle Simulation Environment using Open Source
    (2022-05) Kesury, Tayabali Akhtar; Anwar, Sohel; Tovar, Andres; Li, Lingxi
    Advancements in modern technology have brought increased interest in self-driving. This rapid growth of interest has caused a surge in the development of autonomous vehicles, which in turn brings its own challenges, forcing automotive companies to invest heavily in research and development. Simulation is a powerful tool for making progress toward a self-driving autonomous future: with the massive growth in available computing power, simulations can test scenarios and deliver results in real time. A much bigger hurdle, however, is the growing complexity of modeling a complete simulation environment. This thesis focuses on providing a solution for modeling a RADAR sensor for a simulation environment. This research presents a RADAR modeling technique suitable for an autonomous vehicle simulation environment using open-source utilities. The study proposes to customize an onboard LiDAR model to the specification of a desired RADAR field of view, resolution, and range, and then to utilize a density-based clustering algorithm to generate the RADAR output in an open-source graphical engine such as Unreal Engine (UE). High-fidelity RADAR models have recently been developed for proprietary simulation platforms such as MATLAB under its Automated Driving Toolbox; however, open-source RADAR models for open-source simulation platforms such as UE are not available. This research focuses on developing a RADAR model in UE using Blueprint visual scripting for off-road vehicles.
The model discussed in this thesis takes 3D point cloud data generated from the simulation environment, clips the data according to the field of view of the RADAR specification, and clusters the points generated from an object using DBSCAN. The model outputs the distance and azimuth from the RADAR sensor to the object in 2D. This model offers developers a base to build upon and helps them develop and test autonomous control algorithms requiring RADAR sensor data. Preliminary simulation results show promise for the proposed RADAR model.
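A minimal sketch of the clip-then-cluster pipeline described above, with a synthetic point cloud, assumed FOV limits, and scikit-learn's DBSCAN standing in for the UE Blueprint implementation:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Synthetic LiDAR-style cloud: two object blobs plus scattered noise (x, y in metres).
obj_a = rng.normal([10.0, 2.0], 0.2, size=(30, 2))
obj_b = rng.normal([25.0, -4.0], 0.2, size=(30, 2))
noise = rng.uniform([0.0, -15.0], [60.0, 15.0], size=(10, 2))
cloud = np.vstack([obj_a, obj_b, noise])

# Clip to an assumed RADAR field of view: 50 m range, +/-20 degrees azimuth.
range_m = np.hypot(cloud[:, 0], cloud[:, 1])
azim_deg = np.degrees(np.arctan2(cloud[:, 1], cloud[:, 0]))
fov = cloud[(range_m < 50.0) & (np.abs(azim_deg) < 20.0)]

# Density-based clustering; label -1 marks noise points.
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(fov)
n_objects = len(set(labels) - {-1})

# RADAR-style 2D output: range (m) and azimuth (deg) of each cluster centroid.
centroids = [fov[labels == k].mean(axis=0) for k in range(n_objects)]
detections = [(float(np.hypot(x, y)), float(np.degrees(np.arctan2(y, x))))
              for x, y in centroids]
```

The `eps` and `min_samples` values here are illustrative; in practice they would be tuned to the simulated point density.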
  • Item
    Experimental Measurement of Blood Pressure in 3-D Printed Human Vessels
    (2022-05) Talamantes, John, Jr.; Yu, Huidan (Whitney); Chen, Jie; Zhu, Likun
    A pulsatile flow loop can be suitable for measuring in vitro blood pressure. The pressure data collected from such a system can be used for evaluating stenosis in human arteries, a condition in which the arterial lumen size is reduced. The objective of this work is to develop an experimental system that simulates blood flow in the human arterial system and measures in vitro hemodynamics using 3-D prints of vessels extracted from patient CT images. Images are segmented and processed to produce 3-D prints of the vessel geometry, which are mounted in the loop. Flow and pressure are controlled using components such as a pulsatile heart pump and resistance and compliance elements. Output data are evaluated by comparison with CFD and invasive measurement. The system is capable of measuring the proximal (Pa) and distal (Pd) pressures to evaluate in vivo conditions and to assess the severity of stenosis, determined through parameters such as fractional flow reserve (FFR = Pd/Pa) or trans-stenotic pressure gradient (TSPG = Pa - Pd). This can be done on a non-invasive, patient-specific basis, avoiding the risk and high cost of invasive measurement. Preliminary measurements of blood pressure demonstrate agreement with both invasive measurement and CFD results. These preliminary results are encouraging and can be improved upon through continued development of the experimental system. A working pulsatile loop capable of measuring the flow and pressure in a 3-D printed artery has been achieved, an initial step for continued development. Future work will include more life-like materials for the artery prints, as well as cadaver vessels.
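The two stenosis indices defined above are direct arithmetic on the measured pressures; a minimal sketch with hypothetical cycle-averaged values (not measurements from the thesis loop):

```python
def stenosis_indices(pa_mmhg, pd_mmhg):
    """Fractional flow reserve and trans-stenotic pressure gradient
    from proximal (Pa) and distal (Pd) pressures."""
    ffr = pd_mmhg / pa_mmhg     # FFR = Pd / Pa (dimensionless)
    tspg = pa_mmhg - pd_mmhg    # TSPG = Pa - Pd (mmHg)
    return ffr, tspg

# Hypothetical pressures in mmHg.
ffr, tspg = stenosis_indices(100.0, 75.0)   # -> 0.75, 25.0
```

An FFR of 0.80 or below is the commonly used clinical cutoff for hemodynamically significant stenosis, which is why a non-invasive estimate of Pd/Pa is valuable.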
  • Item
    Modeling of Steel Laser Cutting Process Using Finite Element, Machine Learning, and Kinetic Monte Carlo Methods
    (2022-05) Stangeland, Dillon; Zhang, Jing; Jones, Alan; Daehyun Koo, Dan
    Laser cutting is a manufacturing technology that uses a focused laser beam to melt, burn, and vaporize material, resulting in a high-quality cut edge. Previous efforts have been based primarily on a trial-and-error approach, and there is insufficient understanding of the laser cutting process, hindering further development of the technology. The motivation of this thesis is therefore to address this research need by developing a series of models to understand the thermal and microstructural evolution in the process. The goal of the thesis is to design a tool for optimizing the steel laser cutting process through a modeling approach, achieved through three interrelated objectives: (1) understand the thermal field in the laser cutting of ASTM A36 steel using the finite element (FE) method coupled with a user-defined moving heat source package; (2) apply machine learning to predict the heat-affected zone (HAZ) and kerf, the key features of the laser cutting process; and (3) employ kinetic Monte Carlo (kMC) simulation to predict the resultant microstructures. Specifically, in the finite element model, a laser beam was applied while varying the laser's power, cut speed, and focal diameter. The results generated by the finite element model were then used by two machine learning algorithms, a neural network and a support vector machine, to predict the HAZ distance and kerf width produced by the laser cutting process. Finally, the thermal field was imported into the kMC model as the boundary condition to predict grain evolution in the metal. The results showed that increasing the focal diameter of the laser decreases the kerf width and greatly decreases the HAZ distance.
Additionally, a pulse-like pattern was observed in the modeled kerf width, which can be smoothed toward a more uniform cut by increasing the focal diameter. Increasing the laser power increases the HAZ distance, the kerf width, and the region of material above its original temperature, whereas increasing the cut speed decreases the HAZ distance, kerf width, kerf pulse-like pattern, and region of material above its original temperature. The machine learning algorithms were found to predict the HAZ distance effectively to a certain degree: the neural network and support vector machine models both show that the experimental HAZ distance data agree with the results derived from ANSYS, while the Gaussian process regression HAZ model is not powerful enough to produce an accurate prediction. All of the kerf width models show that the experimental data are overfit by the ANSYS results; as such, the kerf width results from ANSYS need additional validation. Using the kMC model to examine the microstructural change due to laser cutting, three observations were made. First, the largest grain growth occurs at the edge of the laser where the material was not hot enough to be cut. Second, grain growth decays as the distance from the edge increases. Finally, at the edge of the HAZ boundary, grain growth does not occur.
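The grain-coarsening behavior described above can be illustrated with a simplified zero-temperature Metropolis Potts model on a small lattice. This is an illustrative stand-in for, not a reproduction of, the thesis's kMC model, and it omits the imported thermal field:

```python
import numpy as np

def boundary_energy(g):
    """Count unlike nearest-neighbour pairs (grain-boundary energy, periodic lattice)."""
    return int((g != np.roll(g, 1, 0)).sum() + (g != np.roll(g, 1, 1)).sum())

def mc_sweep(g, rng):
    """One T=0 sweep: re-orient a random site to a random neighbour's grain ID
    whenever the move does not increase the boundary energy."""
    n = g.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, 2)
        if rng.random() < 0.5:                       # pick a horizontal or vertical neighbour
            ni, nj = (i + rng.choice([-1, 1])) % n, j
        else:
            ni, nj = i, (j + rng.choice([-1, 1])) % n
        old, new = g[i, j], g[ni, nj]
        if old == new:
            continue
        before = boundary_energy(g)
        g[i, j] = new
        if boundary_energy(g) > before:
            g[i, j] = old                            # reject energy-raising moves
    return g

rng = np.random.default_rng(2)
grains = rng.integers(0, 8, size=(24, 24))           # 8 random grain orientations
e0 = boundary_energy(grains)
for _ in range(5):
    mc_sweep(grains, rng)
e1 = boundary_energy(grains)                         # coarsening: energy does not increase
```

Because every accepted move is non-increasing in boundary energy, the lattice coarsens monotonically, the same driving force (boundary-energy reduction) that produces the grain growth gradients observed near the cut edge.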
  • Item
    Image Segmentation, Parametric Study, and Supervised Surrogate Modeling of Image-based Computational Fluid Dynamics
    (2022-05) Islam, Md Mahfuzul; Yu, Huidan (Whitney); Du, Xiaoping; Wagner, Diane
    With the recent advancement of computation and imaging technology, Image-based computational fluid dynamics (ICFD) has emerged as a great non-invasive capability to study biomedical flows. These modern technologies increase the potential of computation-aided diagnostics and therapeutics in a patient-specific environment. I studied three components of this image-based computational fluid dynamics process in this work. To ensure accurate medical assessment, realistic computational analysis is needed, for which patient-specific image segmentation of the diseased vessel is of paramount importance. In this work, image segmentation of several human arteries, veins, capillaries, and organs was conducted to use them for further hemodynamic simulations. To accomplish these, several open-source and commercial software packages were implemented. This study incorporates a new computational platform, called InVascular, to quantify the 4D velocity field in image-based pulsatile flows using the Volumetric Lattice Boltzmann Method (VLBM). We also conducted several parametric studies on an idealized case of a 3-D pipe with the dimensions of a human renal artery. We investigated the relationship between stenosis severity and Resistive index (RI). We also explored how pulsatile parameters like heart rate or pulsatile pressure gradient affect RI. As the process of ICFD analysis is based on imaging and other hemodynamic data, it is often time-consuming due to the extensive data processing time. For clinicians to make fast medical decisions regarding their patients, we need rapid and accurate ICFD results. To achieve that, we also developed surrogate models to show the potential of supervised machine learning methods in constructing efficient and precise surrogate models for Hagen-Poiseuille and Womersley flows.
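A toy version of the Hagen-Poiseuille surrogate idea: sample the analytic solution Q = πR⁴ΔP/(8μL), then train a regressor to reproduce it. The viscosity, vessel length, and choice of a random forest are illustrative assumptions, not the thesis's actual surrogate:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

MU, L = 3.5e-3, 0.05   # assumed blood viscosity (Pa*s) and vessel length (m)

def poiseuille_q(radius, dp):
    """Hagen-Poiseuille volumetric flow rate: Q = pi * R^4 * dP / (8 * mu * L)."""
    return np.pi * radius**4 * dp / (8 * MU * L)

# Training data: sampled radii (m) and pressure drops (Pa) with analytic flow rates.
rng = np.random.default_rng(3)
radius = rng.uniform(1e-3, 4e-3, 1000)
dp = rng.uniform(100.0, 2000.0, 1000)
X = np.column_stack([radius, dp])
y = poiseuille_q(radius, dp)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Query the surrogate at an unseen design point and compare with the analytic value.
q_true = poiseuille_q(2e-3, 1000.0)
q_pred = surrogate.predict(np.array([[2e-3, 1000.0]]))[0]
```

Once trained, the surrogate answers in microseconds what the analytic (or, in the thesis, the VLBM) evaluation would otherwise compute, which is the point for clinical turnaround time.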
  • Item
    Physics-Based Modelling and Simulation Framework for Multi-Objective Optimization of Lithium-Ion Cells in Electric Vehicle Applications
    (2022-05) Gaonkar, Ashwin; El-Mounayri, Hazim; Tovar, Andres; Zhu, Likun; Shin, Hosop
    In recent years, lithium-ion batteries (LIBs) have become the most important energy storage system for consumer electronics, electric vehicles, and smart grids. The development of LIBs based on current practice allows an energy density increase estimated at 10% per year, but the power required by portable electronic devices is predicted to increase at a much faster rate, namely 20% per year. Similarly, global electric vehicle battery capacity is expected to grow from around 170 GWh per year today to 1.5 TWh per year in 2030, an almost nine-fold increase. Without a breakthrough in battery design technology, it will be difficult to keep up with the increasing energy demand. To that end, a design methodology to accelerate LIB development is needed, which can be achieved through the integration of electrochemical numerical simulations and machine learning algorithms. This study develops such a design methodology and framework using Simcenter Battery Design Studio® (BDS) and Bayesian optimization for the design and optimization of cylindrical 18650 cells. The cathode materials are nickel-cobalt-aluminum (NCA) and nickel-manganese-cobalt-aluminum (NMCA), the anode is graphite, and the electrolyte is lithium hexafluorophosphate (LiPF6). Bayesian optimization has emerged as a powerful gradient-free optimization methodology for problems that involve the evaluation of expensive black-box functions; here, the black-box functions are simulations of the cyclic performance test in Simcenter Battery Design Studio. The physics model used in this study is based on the full system model described by Fuller and Newman; it uses the Butler-Volmer equation for ion transport across an interface and a solvent diffusion model (Ploehn model) for the aging of lithium-ion cells.
The BDS model considers the effects of SEI, cell electrode and microstructure dimensions, and charge-discharge rates to simulate battery degradation. Two objectives are optimized: maximization of the specific energy and minimization of the capacity fade. A global sensitivity analysis shows that the thickness and porosity of the LIB electrode coatings affect the objective functions the most; accordingly, the design variables selected for this study are the thickness and porosity of the electrodes, with thickness restricted to 22-240 microns and porosity to 0.22-0.54. Two case studies are carried out using these objective functions and parameters. In the first, cycling tests of 18650 NCA-cathode Li-ion cells are simulated: the cells are charged and discharged at a constant 0.2C rate for 500 cycles. In the second, a cathode active material more relevant to the electric vehicle industry, nickel-manganese-cobalt-aluminum (NMCA), is used, and the cells are cycled under 5 different charge-discharge scenarios to replicate what an EV battery module experiences. The results show that the design and optimization methodology can identify cells that satisfy the design objectives, extending and improving the Pareto front beyond the original sampling plan for several practical charge-discharge scenarios that maximize energy density and minimize capacity fade.
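The Pareto front referred to above is the set of designs no other design beats on both objectives at once; a brute-force non-dominated filter over hypothetical (specific energy, capacity fade) outcomes:

```python
def pareto_front(designs):
    """Non-dominated designs for (maximize specific_energy, minimize capacity_fade).
    Each design is a (specific_energy_wh_per_kg, capacity_fade_pct) tuple."""
    front = []
    for e, f in designs:
        dominated = any(e2 >= e and f2 <= f and (e2 > e or f2 < f)
                        for e2, f2 in designs)
        if not dominated:
            front.append((e, f))
    return front

# Hypothetical outcomes for candidate electrode thickness/porosity designs.
candidates = [(250.0, 8.0), (240.0, 5.0), (230.0, 9.0), (260.0, 12.0), (235.0, 5.5)]
front = pareto_front(candidates)   # (230, 9) and (235, 5.5) are dominated
```

Extending the front, as the study reports, means finding designs like a higher-energy point with no worse fade than any existing front member.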
  • Item
    Reduction of Mixture Stratification in a Constant-Volume Combustor
    (2021-12) Rowe, Richard Zachary; Nalim, M. Razi; Larriba-Andaluz, Carlos; Yu, Huidan (Whitney)
    This study contributes to a better working knowledge of the equipment used in a well-established combustion lab. In particular, several constant-volume combustion properties (e.g., ignition delay time, flame propagation, and more) are examined to deduce any buoyancy effects in fuel-air mixtures and to develop a method for minimizing such effects. The study was conducted on an apparatus designed to model the phenomena occurring within a single channel of a wave rotor combustor, which consists of a rotating cylindrical pre-chamber and a fixed rectangular main combustion chamber. Pressure sensors monitor the internal pressures within both chambers at all times, and two slow-motion videography techniques visually capture combustion phenomena occurring within the main chamber. A new recirculation pump system was implemented to mitigate stratification within the chamber and produce more precise, reliable results. The apparatus was used in several types of experiments involving the combustion of various hydrocarbon fuels in the main chamber, including methane, 50%-50% methane-hydrogen, hydrogen, propane, and 46.4%-56.3% methane-argon. Additionally, combustion products created in the pre-chamber from a 1.1 equivalence ratio reaction between 50%-50% methane-hydrogen and air were utilized in the issuing pre-chamber jet for all hot jet ignition tests. In the first set of experiments, a spark plug ignition source was used to study how combustion events travel through the main chamber after different mixing methods were utilized: no mixing, diffusive mixing, and pump circulation mixing. The study reaffirmed that stratification of fuel-air mixtures occurs in the main chamber, evidenced by asymmetrical flame front propagation. Allowing time for mixing, however, resulted in more symmetric flame fronts, broader pressure peaks, and reduced combustion time in the channel.
While 30 seconds of diffusion helped, 30 seconds of pumping (at a rate of 30 pumps per 10 seconds) was found to be the most effective method for reducing stratification effects in the system. Next, stationary hot jet ignition experiments were conducted to compare the time between jet injection and main chamber combustion, and the speed of the resulting shock waves, between cases with no mixing and with 30 seconds of pump mixing. Results continued to show an improvement with the pump cases: ignition delay times were typically shorter, and shock speeds stayed about the same or increased slightly. These properties are vital when studying and developing wave rotor combustors; therefore, reducing stratification (specifically by means of a recirculation system) should be considered a crucial step in laboratory models such as this one. Lastly, experiments between a fueled main chamber and rotating pre-chamber helped evaluate the leakage rate of the traversing hot jet ignition experimental setup paired with the new pump system. In its current form, major leaks are inevitable when attempting traversing jet experiments, especially with the pump's suction drawing sudden large plumes of outside air into the main chamber. To minimize leaks, gaps between the pre-chamber and main chamber should be reduced, and the contact surface between the two chambers should be more evenly distributed. Also, the pump system should only be operated as long as needed to evenly distribute the fuel-air mixture, which occurs approximately when the main chamber's total volume has been circulated through the system once. Therefore, a new pump system with half of the original system's volume was developed to decrease the pumping time and lower the risk of leaks.