Global Model Interpretability Techniques For Black Box Models


This article was published as a part of the Data Science Blogathon.

Introduction

There is no mathematical equation for model interpretability.

‘Interpretability is the degree to which a human can consistently predict the model’s result’

An interpretable model that makes sense is far more trustworthy than an opaque one, for two reasons. First, business users will not make million-dollar decisions just because a computer said so. Second, data scientists need interpretable models to ensure that no errors were made in data collection or modeling, errors that would otherwise let the model perform well in evaluation but fail miserably in production.

How important interpretability is depends on who uses the model. Accuracy may matter more than interpretability when the model simply powers a solution, that is, when the data product communicates with another system or through an interface, so that no human needs to understand its reasoning. When humans are the users of the model, however, interpretability takes a front seat.

Interpretability is especially important in fields where the margin of error is low. Finance is empirically a social science, and there is no logical guarantee that tomorrow will resemble any day in the past, so users must be able to understand the model. Consider a probability of default (PD) model: simply classifying a customer as 'good' or 'bad' is not sufficient. The loan-approving authorities need a definite scorecard to justify the basis for this classification, and the PD model should make sense with regard to the variables used.

Interpretability can be classified as ‘Global’ and ‘Local’.

Global Interpretability:

This level of interpretability is about understanding how the model makes decisions, based on a holistic view of its features and each of the learned components such as weights, other parameters, and structures. Global model interpretability helps to understand the distribution of your target outcome based on the features. For a PD model, it helps in understanding the basis for the classification of ‘good’ or ‘bad’.

Local Interpretability:

This level of interpretability is about understanding a single prediction of a model. Say, if you want to know why a particular customer is classified as ‘bad’, the local interpretability of the model is imperative.

Machine learning algorithms improve prediction accuracy over traditional statistical models, but they are often black-box models. In this article, I will discuss some of the global techniques that help us interpret these black-box models.

Implementation & Explanation

I have used the Default of Credit Card Clients dataset from the UCI Machine Learning Repository for the explanation. The goal is to predict whether a customer will default next month (Yes = 1, No = 0).

After pre-processing the data, I split it into train and test sets with a test size of 30%. The data were standardized using StandardScaler() from sklearn.preprocessing. Three black-box models were used to classify the clients: Random Forest, XGBoost, and a Support Vector Classifier. I achieved the following evaluation results:
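For reference, here is a minimal sketch of that setup. It is an assumption-laden illustration: the file path, the "ID" and target column names, and the default hyperparameters are placeholders, not the exact configuration behind the reported results.

```python
# Minimal sketch of the setup described above (placeholder path/columns).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

df = pd.read_csv("UCI_Credit_Card.csv").drop(columns=["ID"], errors="ignore")
X = df.drop(columns=["default.payment.next.month"])
y = df["default.payment.next.month"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y)

scaler = StandardScaler()
X_train_s = pd.DataFrame(scaler.fit_transform(X_train), columns=X.columns)
X_test_s = pd.DataFrame(scaler.transform(X_test), columns=X.columns)

models = {
    "Random Forest": RandomForestClassifier(random_state=42),
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=42),
    "SVC": SVC(probability=True, random_state=42),
}
for name, model in models.items():
    model.fit(X_train_s, y_train)
    print(name, round(accuracy_score(y_test, model.predict(X_test_s)), 3))
```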

All three models give us more than 80 percent accuracy. Let us now try the Global Interpretation methods for determining feature importance, feature effects, and feature interaction.

Feature Importance: Permutation Importance

Permutation feature importance measures the increase in the prediction error of the model after we permute a feature's values, which breaks the relationship between the feature and the true outcome. Because permutation feature importance relies on measurements of model error, we use the test data here.
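Here is a minimal sketch using scikit-learn's permutation_importance on the held-out test set; it assumes the objects from the setup sketch above. The original article may have used a different tool (such as eli5), but the measurement is the same.

```python
# Permutation importance on the test set (assumes models, X, X_test_s, y_test
# from the setup sketch above).
import pandas as pd
from sklearn.inspection import permutation_importance

result = permutation_importance(
    models["Random Forest"], X_test_s, y_test,
    scoring="accuracy", n_repeats=10, random_state=42)

importance = pd.DataFrame({
    "feature": X.columns,
    "mean_drop_in_accuracy": result.importances_mean,  # decrease after shuffling
    "std_across_shuffles": result.importances_std,     # the "±" part
}).sort_values("mean_drop_in_accuracy", ascending=False)
print(importance.head(10))
```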

The values towards the top are the most important features, and those towards the bottom matter least. The first number in each row shows how much model performance decreased with a random shuffling (in this case, using accuracy as the performance metric). The number after the ± measures how performance varied from one reshuffling to the next.

Negative values indicate that the predictions on the shuffled (noisy) data happened to be more accurate than on the real data. This happens when a feature does not matter (its importance should be close to 0), but random chance caused the predictions on the shuffled data to be more accurate.

Feature Effects: Partial Dependence Plots (PDPs)

While feature importance shows what variables most affect predictions, partial dependence plots show how a feature affects predictions. Like permutation importance, partial dependence plots are calculated after a model has been fit. The model is fit on real data that has not been artificially manipulated in any way.
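A minimal sketch using scikit-learn's PartialDependenceDisplay (scikit-learn 1.0 or newer) follows; the original article may have used a different plotting tool (such as pdpbox), and LIMIT_BAL and AGE are just example columns from this dataset.

```python
# Partial dependence plots for two example features, for both tree models
# (assumes models and X_train_s from the setup sketch above).
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

for name in ["Random Forest", "XGBoost"]:
    fig, ax = plt.subplots(figsize=(10, 4))
    PartialDependenceDisplay.from_estimator(
        models[name], X_train_s,
        features=["LIMIT_BAL", "AGE"],   # example columns; axes are in standardized units
        ax=ax)
    fig.suptitle(f"Partial dependence: {name}")
plt.show()
```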

We observe differing behavior for Random Forest versus XGBoost. The main drawback of PDPs is that they ignore correlations among features. Accumulated Local Effects (ALE) is an alternative technique that handles correlated features.

Feature Effects: Accumulated Local Effects (ALE)

Accumulated local effects describe how features influence the prediction of a machine learning model on average. ALE plots are a faster and unbiased alternative to partial dependence plots (PDPs). PDPs suffer from a stringent assumption: features have to be uncorrelated. In real-world scenarios, features are often correlated, whether because some are directly computed from others or because observed phenomena produce correlated distributions. ALE plots, first proposed by Apley and Zhu (2016), alleviate this issue by averaging over the actual conditional distribution of the features instead of their marginal distributions, which makes them more reliable when handling (even strongly) correlated variables.

In the Python environment, there is no mature, stable library for ALE. I have only found alepython, which is still very much in development and does not yet support categorical features.
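Because tooling is thin, here is a minimal sketch of the first-order ALE idea for a single numeric feature, written directly with NumPy. It is a simplified illustration (for example, the centring step ignores bin sizes), not a substitute for a proper library, and the usage line assumes the objects from the setup sketch above.

```python
import numpy as np

def ale_1d(predict, X, feature, n_bins=20):
    """First-order ALE for one numeric feature of a 2-D numpy array X.
    `predict` should return a 1-D score, e.g. model.predict_proba(...)[:, 1]."""
    x = X[:, feature]
    # Quantile-based edges so every interval holds roughly the same data.
    edges = np.unique(np.quantile(x, np.linspace(0, 1, n_bins + 1)))
    local_effects = []
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        in_bin = (x >= lo) & (x <= hi) if i == 0 else (x > lo) & (x <= hi)
        if not in_bin.any():
            local_effects.append(0.0)
            continue
        X_lo, X_hi = X[in_bin].copy(), X[in_bin].copy()
        X_lo[:, feature] = lo            # prediction at the lower edge...
        X_hi[:, feature] = hi            # ...and at the upper edge
        local_effects.append((predict(X_hi) - predict(X_lo)).mean())
    ale = np.cumsum(local_effects)       # accumulate the local effects
    return edges[1:], ale - ale.mean()   # simple (unweighted) centring

# Example (hypothetical feature index 0):
# grid, ale = ale_1d(lambda Z: models["Random Forest"].predict_proba(Z)[:, 1],
#                    X_test_s.to_numpy(), feature=0)
```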

Feature Interaction: Friedman’s H-statistic

When features interact with each other in a prediction model, the prediction cannot be expressed as the sum of the individual feature effects, because the effect of one feature depends on the value of another. Friedman's H-statistic quantifies this: a two-way measure tells us whether, and to what extent, two features in the model interact with each other, and a total measure tells us whether, and to what extent, a feature interacts with all other features. Friedman and Popescu also propose a test statistic to evaluate whether the H-statistic differs significantly from zero; the null hypothesis is the absence of interaction.
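There is no standard scikit-learn implementation of the H-statistic, so here is a minimal, unoptimized sketch of the two-way version taken straight from its definition: partial dependence functions are estimated on a sample, centred, and compared. The commented usage line assumes the objects from the setup sketch above, with hypothetical feature indices.

```python
import numpy as np

def h_statistic(predict, X, j, k, sample_size=200, random_state=0):
    """Two-way Friedman H-statistic for features j and k of a 2-D array X.
    `predict` should return a 1-D score. Follows the definition directly and
    is O(n^2) in the sample size, so keep sample_size small."""
    rng = np.random.default_rng(random_state)
    idx = rng.choice(len(X), size=min(sample_size, len(X)), replace=False)
    S = X[idx]
    n = len(S)

    def centred_pd(features):
        # Partial dependence at each sample point: fix the chosen feature(s)
        # of every row to that point's value(s) and average the predictions.
        vals = np.empty(n)
        for i in range(n):
            X_mod = S.copy()
            X_mod[:, features] = S[i, features]
            vals[i] = predict(X_mod).mean()
        return vals - vals.mean()

    pd_j, pd_k, pd_jk = centred_pd([j]), centred_pd([k]), centred_pd([j, k])
    return np.sqrt(np.sum((pd_jk - pd_j - pd_k) ** 2) / np.sum(pd_jk ** 2))

# Example (hypothetical feature indices):
# h = h_statistic(lambda Z: models["XGBoost"].predict_proba(Z)[:, 1],
#                 X_test_s.to_numpy(), j=0, k=5)
```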

Conclusion

A trend in machine learning is the automation of model training. That includes automated engineering and selection of features, automated hyperparameter optimization, comparison of different models, and ensembling or stacking of the models. Model Interpretability will aid in this process and will eventually be automated itself.

This article is dedicated to Christoph Molnar. The motivation was his holistic work 'Interpretable Machine Learning: A Guide for Making Black Box Models Explainable'.

Related


AI Black Box: A Demystified Guide

Understanding the AI black box: AI systems whose core workings are hidden from the user

Some people associate the phrase “black box” with the recording mechanisms used in aircraft that are useful for postmortem examinations in the event of the unthinkable. Others associate it with tiny, sparsely furnished theatres.

But the phrase “black box” is also significant in artificial intelligence. AI “black boxes” are systems that have unobservable internal operations. You can provide input to them and receive output, but you cannot look at the system’s code or the reasoning that led to the output.

The most common branch of artificial intelligence is machine learning. It is the foundation of ChatGPT and DALL-E 2, two generative AI systems. Machine learning has three components: an algorithm (or set of algorithms), training data, and the resulting model. An algorithm is a collection of steps. In machine learning, an algorithm is trained on a sizable collection of examples, the "training data," and learns to recognize patterns. The result of training a machine-learning algorithm is a machine-learning model, and the model is what humans use.

A machine-learning algorithm, for instance, might be designed to find patterns in photos, and the training data might be pictures of dogs. The result would be a dog-spotting machine-learning model: it would take an image as input and return information on whether, and where, a set of pixels in the image represents a dog.

A machine-learning system can have any of its three components hidden, or placed in a "black box." As is frequently the case, the algorithm is publicly known, which makes hiding it less effective. So, to safeguard their intellectual property, AI developers often enclose the model in a black box. Another strategy software developers employ is to hide the data used to train the model, that is, to place the training data in a black box.

Glass boxes are occasionally used to describe the opposite of black boxes. An AI glass box is a system whose training data, model, and algorithms are all publicly accessible. Even so, some researchers describe aspects of these systems as black boxes.

That is because deep learning algorithms, in particular, are still not fully understood even by experts. Researchers in explainable AI strive to create algorithms that, while not necessarily glass boxes, are easier for people to understand.

Why the AI Black Box Matters

Black box machine learning models and techniques should generally be avoided. Suppose a machine-learning algorithm has diagnosed a health issue: would you want a glass box model or a black box model? What about the doctor prescribing your treatment plan? She would likely want to know how the model arrived at its decision.

What happens if a machine-learning model used to assess your eligibility for a business loan rejects you? Wouldn't you want to know why? If you did, you could more effectively challenge the decision, or change your circumstances to improve your loan prospects.

Black boxes also have significant implications for software security. For many years, many people in the computing industry believed that placing software inside a black box would keep hackers from examining it and therefore make it secure. That presumption has been disproved: hackers can reverse-engineer software, effectively creating their own copy by carefully studying how it functions, and then find weaknesses to exploit.

Simple Techniques For Shooting Close-Ups

Close-up photos pull you directly into a subject so you can examine its details from a unique perspective. A close-up tends to focus on a specific thing—an insect, a plant, a flower, or a face, for example. Or it can highlight something we don’t usually pay much attention to, but which turns out to be captivating, dramatic, or revealing when intimately observed.

Close-up photos can tell a powerful story in a single shot: taking a photo of a person's weathered hands, for example, might be a way to convey the fact that they have worked hard all their life. This iconic close-up photograph from Uganda showing the contrast between the hand of a malnourished boy and that of a missionary tells a powerful story about famine.

Two examples of close-up photography. In each case the focus is on just one part of the horse's face, which is isolated from the background.

Close-up vs. macro

Often we hear the word macro used in reference to—or even interchangeably with—close-up photography. But there is a key difference. A close-up is an image shot at close range, where the subject is isolated from its environment. Any camera and lens can shoot a close-up. A macro photograph, however, is an extreme close-up that portrays the subject as life-size or greater-than-life-size.

Macro photos are characterized by both closeness and magnification. If you wanted to photograph the details of an insect’s eyes, for example, you would take a macro photograph.

A macro photo is generally expressed as a ratio—a 1:1 ratio is when the image is life-size. To take a high-quality macro shot, you must use a special macro lens whose performance is specifically geared to close-focus shooting. A normal lens can’t focus when it’s very close to the subject and thus can’t take an image at a ratio greater than 1:1. A macro lens, however, can focus when positioned very close to the subject, allowing it to achieve greater-than-life-size magnification, a shallower depth of field, and thus clearer focus on tiny details.

Here is an example of a close-up photograph (above) and a macro photograph (below).

Equipment

If you’re aiming for high-quality macro shots, then consider investing in a dedicated macro lens. Almost all manufacturers of DSLR cameras offer a variety of lenses, including macros ranging from short (30mm to 60mm) to medium (60mm to 105mm) to tele macro (105mm to 200mm).

However, for regular close-ups, zoom lenses like a 55mm to 200mm or a 70mm to 300mm lens will work well. Even a fixed 50mm lens with an f/1.8 aperture can produce some nice close-ups.

Macro mode

Certain point-and-shoot cameras or DSLRs let you switch into macro mode simply by turning the dial to a macro setting (usually a tulip symbol). This allows you to focus at a very short distance from the subject. The quality of this macro setting, however, is very different from the quality you get when you use a dedicated macro lens. A camera's macro setting will not shoot a subject so that it appears greater than life-size.

Focus and composition

For a great close-up, isolate your subject from its background by using a shallow depth of field (set the aperture to a low number) and/or picking a nondistracting background, if possible. Focus carefully and pick a specific focus point so your subject comes out looking sharp against a softer background. If you use a camera or lens with autofocus, make sure the lens is focusing on the object you want. Without a macro lens, you may have trouble focusing precisely, but you can remedy this by moving the camera a bit farther away from the subject. If you are using a zoom lens, then move back and zoom into your subject.

Focus is critical in close-up images. The point of focus in this image was meant to be the reflection in the eye (right) rather than the veins (left).

Lighting and image stability

A common problem with close-ups is that if your light source is behind the camera, it will cast a shadow over the subject. Fix this problem by using a flash or other off-camera lighting. An off-camera flash helps you avoid flattening the image and casting a shadow from the camera's position. In the image on the left, a shadow is cast over the subject. In the image on the right, I took the strobe off the camera to remove that shadow; an on-camera flash is still causing the slight shadow in the background.

Keeping images sharp

Another common problem with close-up photography is image blur. The most common cause is the lens's inability to focus at such close proximity to the subject. To prevent this, first switch your camera to the macro setting (if it has one) and try again. If that fails, move the camera a little farther away from the subject, or if you're using a zoom lens, back up and zoom into the subject. Image blur can also be caused by a slow shutter speed, low light, or a moving subject. To prevent this kind of blurring, set your camera up on a tripod or raise your shutter speed.

In the image on the left, the lens (a Nikon 50mm) was too close to the subject to focus automatically, thus creating a blurred image. In the photo on the right, I moved several inches away from the subject and was able to focus properly.

Close-up photography is about capturing small details in a fleeting moment. Regardless of the kind of camera or lens you're using, you can make your close-ups expressive and evocative.

What Is Grey Box Testing?

Introduction

Grey box testing is a software testing approach in which a software program is evaluated with only limited knowledge of its internal workings. It is a hybrid of white box testing (which uses access to the internal code to develop test cases) and black box testing (which tests at the functionality level).

Grey box testing is frequently used to identify context-specific problems in web applications. For example, if a tester discovers a flaw during testing, he can modify the code to investigate the problem and then retest it in real time. Grey box testing focuses on all layers of a complex software system in order to increase testing coverage: it enables testing of both the presentation layer and the underlying code structure. It is typically employed in integration testing and penetration testing.

Gray Box Testing is a software testing approach that is a hybrid of White Box Testing and Black Box Testing.

In White box testing, the internal structure (code) is known.

In Black box testing, the internal structure (code) is unknown.

In Grey box testing, the internal structure (code) is only partially known. Two examples:

#1) While testing a website, if the tester finds that a link is not working, the Grey box tester can make changes to the HTML code to validate the problem. In this case, white box testing is performed by modifying the code, and black-box testing is performed concurrently as the tester checks the changes at the front end. Grey box testing is produced by combining the White box with the Black box.

#2) Grey box testers with knowledge of, and access to, the error code database, which records the cause of each error code, can analyse error codes and explore their causes in more depth. Suppose the webpage returns the error code "Internal server error 500," and the table lists the reason for this issue as a server error. Using this information, the tester can investigate the problem further and report the exact cause, while the developer and the tester work at the same time to improve the overall quality of the product.
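As an illustration of example #2, here is a hypothetical pytest-style sketch: the tester relies on partial internal knowledge (the error-code table) while driving the application purely from the outside. The URL, endpoint, codes, and reasons are invented for illustration, not taken from any real system.

```python
# Hypothetical grey box check: outside-in request, judged against an
# internally known error-code table. URL and table contents are made up.
import requests

ERROR_CODE_TABLE = {                      # assumed export of the internal database
    500: "Internal server error",
    503: "Service temporarily unavailable",
}

def test_report_page_returns_no_known_server_error():
    response = requests.get("https://example.com/reports/monthly")   # placeholder URL
    assert response.status_code not in ERROR_CODE_TABLE, (
        f"Got {response.status_code} ({ERROR_CODE_TABLE.get(response.status_code)}); "
        "investigate the server-side cause before retesting the front end"
    )
```

More generally, grey box testing brings several advantages: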

It shortens the time required for the lengthy process of functional and non-functional testing.

It offers the developer enough time to remedy any product flaws.

It incorporates the user’s point of view rather than the designer’s or tester’s.

It entails a thorough evaluation of requirements and specification determination from the user’s point of view.

Strategy for Gray Box Testing

It is not required for the tester to have access to the source code in order to do Gray box testing. A test is created using information about algorithms, architectures, internal states, and other high-level descriptions of program behavior.

Gray box testing can be done in a variety of ways. It often employs a basic black box testing approach: it is based on developing the required test cases, and it therefore establishes all of the conditions before the program is tested using the assertion technique.

Grey Box Testing Techniques

Matrix Testing

Matrix testing lists all of the variables that are used in a program. Variables are the components in a program that allow values to move through it. They should be tailored to the requirements; otherwise, the program's readability and speed will suffer. The matrix technique detects the variables that are actually used and removes unneeded and uninitialized variables from the program.

Regression Testing

Regression testing is used to ensure that a change to one area of software does not have an unexpected or undesirable effect on another section of the product. Any defects discovered during confirmation testing were corrected, and that portion of the program began to function as planned; nevertheless, it is possible that the fixed flaw caused a new problem elsewhere in the software. Regression testing addresses these types of problems by employing testing techniques such as retesting hazardous use cases, retesting behind a firewall, retesting everything, and so on.

Orthogonal Array Testing or OAT

The goal of this testing is to cover as much code as possible with as few test cases as possible. The test cases are written in such a manner that they cover the most code, as well as the most GUI functionality, with the smallest number of test cases.
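True orthogonal arrays come from combinatorial design, but the same goal, covering every pair of parameter values with few cases, can be illustrated with a simple greedy pairwise generator. The sketch below is such an illustration, with invented parameters and values, not a full OAT implementation.

```python
# Greedy pairwise test-case selection: cover every pair of parameter values
# with far fewer cases than the full cartesian product.
from itertools import combinations, product

def pairwise_cases(parameters):
    names = list(parameters)
    # Every (param_a, value_a, param_b, value_b) combination that must appear.
    uncovered = {(a, va, b, vb)
                 for a, b in combinations(names, 2)
                 for va, vb in product(parameters[a], parameters[b])}
    candidates = [dict(zip(names, vals)) for vals in product(*parameters.values())]
    chosen = []
    while uncovered:
        # Pick the candidate that covers the most still-uncovered pairs.
        best = max(candidates, key=lambda c: len(
            {(a, c[a], b, c[b]) for a, b in combinations(names, 2)} & uncovered))
        newly = {(a, best[a], b, best[b]) for a, b in combinations(names, 2)} & uncovered
        if not newly:
            break
        uncovered -= newly
        chosen.append(best)
    return chosen

cases = pairwise_cases({
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS"],
    "role": ["guest", "admin"],
})
print(len(cases), "cases instead of", 3 * 2 * 2)
for case in cases:
    print(case)
```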

Pattern Testing

Pattern testing applies to software that is created by following the same pattern as prior software. The same kind of flaws is possible in this form of software. Pattern testing identifies the causes of failure so that they may be addressed in future software.

The grey box approach often uses automated software testing tools to carry out the testing procedure. Stubs and module drivers are supplied to the tester to reduce the need for manual code development.

The following are the steps to do Grey box testing −

Step 1 − Make a list of all the inputs.

Step 2 − Determine the outcomes

Step 3 − Make a list of the key routes.

Step 4 − Determine the Subfunctions

Step 5 − Create subfunction inputs.

Step 6 − Develop Subfunction Outputs

Step 7 − Run the Subfunctions test case.

Step 8 − Check that the Subfunctions result is valid.

Step 9 − Repeat steps 4–8 for each additional Subfunction.

Step 10 − Carry on with steps 7 and 8 for the remaining Subfunctions.

Examples of test cases for grey box testing include GUI-related, security-related, database-related, browser-related, and operating-system-related tests, among others.

Gray Box Testing’s Benefits

The software’s quality is improving.

This method focuses on the user’s perception.

Developers gain from grey box testing since they have more time to resolve bugs.

Grey box testing combines both black box and white box testing, giving you the best of both worlds.

Grey box testers don’t need to have extensive programming expertise in order to evaluate a product.

Integration testing benefits from this testing method.

This testing approach ensures that the developer and the tester are on the same page.

This approach may be used to test complex apps and situations.

This kind of testing is non-intrusive.

Gray Box Testing’s Drawbacks

Grey box testing does not allow for complete white box testing, because the source code cannot be fully accessed.

This testing approach makes it harder to link problems in a distributed system.

It is difficult to create test cases for grey box testing.

Access to code path traversal is likewise restricted as a result of limited access.

Gray Box Testing Difficulties

A component under test may fail in some way, causing the ongoing test run to be terminated.

A test may run to completion, yet the content of its result may still be wrong.

Summary

Grey box testing can minimize the overall cost of system faults and prevent them from spreading further.

Grey box testing is best suited for GUI, Functional Testing, security assessment, online applications, web services, and other similar applications.

Grey box Testing Methodologies −

Matrix Testing

Regression Testing

OAT or Orthogonal Array Testing

Pattern Testing

Frequently Asked Questions

Q #1) In software testing, what is grey box testing?

Answer − Grey box testing is used to eliminate any faults caused by difficulties with the application’s internal structure. This testing method combines Black box and White box testing techniques.

Q #2) Provide an example of grey box testing.

Answer − Both black box and white box testing are included in grey box testing. All of the specific documentation and requirements are available to the tester. For example, if a website’s link isn’t working, it may be examined and updated immediately in HTML and confirmed in real time.

Top 3 Most Effective A/B Testing Techniques For Landing Pages

Landing pages are the backbone of digital marketing. These one-page creations are responsible for most businesses’ sales conversions. The importance of a good landing page cannot be overstated — they turn visitors into customers. 

Before we jump into the three most effective A/B testing techniques, let's discuss what constitutes a well-made landing page. The most obvious starting point is the headline. It is estimated that 80 percent of consumers won't get past the headline, making this section of a landing page the most important. A good headline is both clear and enticing; it is short and makes use of strong verbs.

As we enter a new decade, landing pages are becoming more interactive for the user, so images and videos appear on more and more of them. While it is true that these elements can increase the probability that a visitor becomes a lead, they can also drain conversion rates. Poor imagery and video can harm a business's messaging, and bad use of these elements can result in high bounce rates. A well-made landing page therefore weighs the true effectiveness of images and video against their intended purpose.

The submission form collects the prized information you are after. To increase the chances that visitors fill out this form, it needs to be above the page fold and ask for as little information as possible. Visitors are less likely to submit their information as more is required. If a name and email address will suffice, don't probe for more.

Finally, to conclude what a well-made landing page consists of: attractive, easy-to-follow design must be a focal point. While design is often subjective, a landing page should be minimalist. Don't overwhelm visitors with too much information or disjointed elements. In addition, limit the ability of visitors to navigate away from your page.

A/B testing is all about making data-driven decisions. Here are three techniques to help in your quest to find the best version of your landing page.

Tip #1 – Is Your USP Good Enough? 

A unique selling proposition (USP) is the overlap between what consumers want and what you do well. These key points explain why your business is the right solution for the consumer compared with competitors in the same market, and they are the main message a landing page tries to convey to the visitor.

It goes without saying, but is the messaging surrounding your USP working? 

Is it clear?

Exciting? 

Creative? 

You need to tinker and tweak in order to find the right words that make sure your USP jumps off the page.

Tip #2 – Test Duration 

A/B testing seems straightforward in concept but can quickly become convoluted and misleading in its results. One of the main reasons bad decisions are made is bad information. In A/B testing, the pages you are comparing need a fair sample size or period of time. Outliers and shortchanged results come into play when testing lacks a reliable sample size and/or is ended prematurely.

Convert is a free tool that helps you estimate the right sample size and/or testing duration that will yield results you can trust. 
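If you would rather sanity-check such numbers yourself, the standard two-proportion approximation is easy to compute. The sketch below assumes a two-sided test at 5% significance and 80% power, with an illustrative baseline conversion rate of 5% and a hoped-for lift to 6%; it is a rough estimate, not a replacement for a proper calculator.

```python
# Approximate visitors needed per landing-page version to detect a change in
# conversion rate (two-sided z-test; the rates below are illustrative).
from scipy.stats import norm

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)          # critical value for significance
    z_beta = norm.ppf(power)                   # critical value for power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

print(round(sample_size_per_variant(0.05, 0.06)))   # roughly 8,000+ visitors per version
```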

Tip #3 – You Need to Develop a Hypothesis 

You don’t take an organic chemistry exam without preparing beforehand. Yet, so many A/B tests are conducted without the necessary preparation. To ensure that your A/B testing is effective it is important to start with brainstorming and to-date analysis of prior landing page performance. 

Set out to answer these questions first: 

Why are you testing? 

What’s your proposed solution? (elements you are seeking to test)

What’s your assumption of how the solution would change user behavior?  

What result do you expect?

By developing a hypothesis from the outset, you are setting yourself up to succeed. The information you capture will then be easier to draw inferences and decisions from. A/B testing is a calculated science, not a couple of shots in the dark.

Need A/B Testing Help? 

At Venta Marketing, we are helping businesses meet their goals head-on. As leaders in the digital marketing industry, we know the ins-and-outs of A/B strategy and technique. Contact us today if you are curious about our services! 


8 Best 3D Printers For Printing Miniatures And Tabletop Models

With the development of new technologies, 3D printing has become accessible to everyone. There are various industries from medicine to engineering where 3D printers are becoming standard tools. But gamers also enjoy making their own miniature models for tabletop games such as DnD or Warhammer 40K. You can easily use 3D printers to make models of characters and monsters for your favorite games.

Miniatures are highly detailed representations of creatures and characters. Modern 3D printers can now produce minute detail such as a dragon’s scales or dents in armor. Here are the best 3D printers for printing miniatures and tabletop models.


Resin Printers vs. Filament Printers

Not all 3D printers can produce the high level of detail needed for miniatures. Also, it’s important to understand the differences between resin and filament (FDM) printers.

Essentially, there are two consumer-grade types of 3D printers. The first type, the Fused Deposition Modeling (FDM) printer, melts plastic filament and deposits it in layers to create a 3D object. The second type, the stereolithography (SLA) resin printer, uses UV light to harden liquid resin to form the object.

The problem with filament printers is that the melted plastic leaves visible layer lines and, with a few exceptions, they cannot reproduce fine detail. Resin is overall better for miniatures and figurines because it can capture the tiniest details.

That said, FDM printers still have uses in the world of miniatures. They print much faster and the material is cheaper than resin. Filament printers are better for creating terrain and large models of buildings, mountains, castles, and many other items used in tabletop games.

Although resin printers are much better at creating miniatures, they have their own set of pitfalls. Resin is toxic, and you will need a well-ventilated space in which to work. You will also have to wear gloves and a mask as a precaution because ventilation itself is not enough. Besides this, SLA printers are much slower than the FDM ones, and your operating costs will be somewhat higher.

What to Look for in 3D Printers for Miniatures and Tabletop Models

The most important factor that determines the visual quality of the print is resolution. But the resolution of 3D printers is different from display resolution. It is measured in microns and tells you the smallest possible motion a print head can make while printing a layer.

There are two resolution measurements in 3D printers: the XY plane and the Z-axis. XY-plane resolution measures the finest detail the printer can resolve within a single layer, while Z-axis resolution is the thickness of a printed layer (the layer height). Quality miniatures for tabletop games demand a resolution of at least 50 microns, but the latest printers like the Saturn 2 can achieve a fine resolution of 28.5 microns.

The next thing you should look for in 3D printers is the build volume. It will tell you how big your miniatures can be. You need to think about what you wish to create in the future. Will you restrict yourself to miniatures only, or will you want to print larger models? 3D printers will require maintenance, and you should also research how easy to use they are. Choose one that suits all your needs as best as possible.

1. Anycubic Photon Mono 4K – Best for Beginners

This 3D printer uses an LCD screen to cast UV light on the resin. This makes the printing time very short, around 1 to 2 seconds for each layer. But Anycubic Photon Mono has a very limited build volume that will restrict the size of your miniatures and models. You won’t be able to print in heights above 6.5 inches (165mm). This is a good enough size for regular DnD miniatures, but figurines or buildings are out of the question.
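To put that per-layer speed in perspective, here is a rough back-of-the-envelope estimate. The 40mm model height and 0.05mm layer height are assumptions for a typical miniature, not quoted specs, and the calculation ignores the lift and retract moves between layers, so real prints take longer.

```python
# Rough print-time estimate for an LCD resin printer: time depends mainly on
# height, because every layer exposes the whole plate at once.
model_height_mm = 40          # typical tabletop miniature (assumed)
layer_height_mm = 0.05        # common layer-height setting (assumed)
seconds_per_layer = 2         # upper end of the 1-2 s exposure quoted above

layers = model_height_mm / layer_height_mm
print(f"{layers:.0f} layers, about {layers * seconds_per_layer / 60:.0f} minutes of exposure")
# -> 800 layers, about 27 minutes, no matter how many miniatures share the plate
```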

2. Anycubic Photon Mono X – Best for Speed

This 3D printer from Anycubic is designed for large-scale prints. It is a resin printer with an LCD screen that is capable of achieving a printing speed of 60mm/hour. Besides the impressive speed, Mono X can print large-scale 3D models as it has a build volume of 192 x 120 x 245mm.

Anycubic Photon Mono X has an XY resolution of 50 microns, and a Z-axis resolution of 10 micrometers. This enables it to print fine details at a high speed. It is equipped with a touch screen user interface, and it can be controlled from your iPhone or Android device with the Anycubic app.

3. Phrozen Sonic Mini 4K – Best Mid-Range Printer

More expensive than the Anycubic Photon Mono, the Phrozen Sonic Mini 4K offers similar specs. It can print miniatures up to 6.5 inches high, and it also uses a monochrome LCD to cast UV light that hardens the resin.

However, what makes Phrozen Sonic Mini 4K a better choice for printing miniatures is its XY resolution. It stands at 35 microns, which enables this 3D printer to create high details, even on small miniatures of only 28 mm. Its z-axis resolution is also impressive at 0.01mm.

4. Phrozen Sonic XL 4K – Best for High Precision

The Phrozen Sonic XL 4K comes with the Dental Synergy slicer and software designed for ease of use. Other 3D resin printers on this list have simple USB or Ethernet connectivity, but the Sonic XL additionally provides microSD and Wi-Fi connections. This 3D printer is amazing at bringing out all the details in your miniatures, and it will take your DnD or tabletop games to a new level.

5. Elegoo Mars 2 Pro – Best Budget Printer

The Elegoo Mars 2 Pro can cost as little as $180, but its print quality is high. It is also impressively fast, printing a layer in about 2 seconds. It has a mono LCD of only 2K resolution, but this is more than enough for its small 129 x 80 mm build platform.

The Mars 2 Pro handles intricate details very well thanks to its XY resolution of 50 microns and Z-axis accuracy of 0.00125mm. For such a small and low-cost 3D printing machine, the Elegoo Mars 2 Pro has some luxurious features, such as a built-in air filtering system and touch-screen controls. It also comes with the CHITUBOX slicer and tools that can hollow out 3D models before printing, which greatly saves printing material.

6. Elegoo Saturn 2 – Best for Going Big

If you want to print bigger miniatures of monsters or your favorite game characters, you have two options. Either use 3D printers with smaller build volumes and glue the pieces together or use Elegoo Saturn 2 to print them in one piece. The second option is always better as you won’t risk the glued parts falling off or warping over time.

Elegoo Saturn 2 has a fine XY resolution of 28 microns and a Z-axis resolution of 0.01mm. But its most impressive feature is its large print volume of 218.88×123.12×250 mm. You can use it to either print one big piece, or to print a batch of multiple miniatures in one go.

7. Creality Ender 3 V2 – Best for Printing Terrain

Creality is a producer of consumer 3D printers, and its Ender 3 V2 is a filament printer with a 220 x 220 x 250 mm build volume. It has many new built-in features that were lacking on previous Ender 3 models.

Among them are a color LCD, a silent mainboard, a tool drawer, and more. You can use the Ender 3 V2 to print wargame miniatures, but you will have to sand them to get rid of the layer lines. Where this 3D printer excels is printing terrain, thanks to its high precision and the low price of the plastic.

The precision of the Ender 3 V2 is high for an FDM printer. The resolution of both the XY plane and the Z-axis is 0.1mm. This is not an impressive number compared to resin printers designed for creating detailed miniatures, but it is a good value for a filament printer, especially one of such a small size.

8. Raise3D E2 – Best for Variety of Materials

Raise3D E2 is a 3D filament printer with Independent Dual Extrusion (IDEX). It can print in a duplicate or mirror mode. It has a flexible heated printing bed which makes E2 an ideal machine for printing with different materials. That means that you can create miniatures and terrain not only from plastic but also from carbon fiber, glass fiber, metal, or wood fill. Impressive!

The E2 3D printer has a large build volume of 330x240x240mm, ideal for terrain and buildings for tabletop games. It has a video offset calibration system for the build plate and easy-to-use software. However, this is a professional 3D printer and its price reflects this. Aside from the high price, the only other downfall of this machine is that it requires thorough cleaning that will take a lot of time.
