How to Model and Implement a Domain Specific Language (DSL) for Functional Test Automation

Abstract

Currently, functional test automation is rarely applied to its full potential and therefore often delivers poor, inadequate results. Consequently, functional test automation is often abandoned.

Test automation can play an important role in software development, significantly improving quality and enabling shorter development cycles and a faster time to market. I propose an approach for efficiently automating functional tests with DSLs that are readable for domain experts (the requirement owners) and encourage re-use. This leads to better software quality thanks to the knowledge the domain experts feed into the test suite, while re-usable building blocks lower the maintenance effort for the test suite.

Introduction

Testing is essential to software quality, and the success of service oriented companies heavily depends on the quality of their software. Yet it is still not clear how functional test automation should be applied in order to significantly improve software quality.

Here I describe an approach to functional test automation which, in comparison to the traditional approach, significantly reduces the cost of ownership of a test suite. The approach mitigates the risk posed by fast evolving software through maintainable functional test automation.

This paper identifies and describes the problems of functional test automation and proposes an approach to address them.

My contributions:

  • My approach significantly reduces the total cost of ownership of an automated functional test suite.
  • In practice I have repeatedly observed that domain experts are not involved in testing and, worse, that domain specific tests are neither documented nor executed. I show how this approach involves the domain experts in functional test automation.
  • I analysed existing approaches that can support DSL modelling. I show how to leverage this existing knowledge in order to successfully jump-start a test automation initiative.
  • I demonstrate how to implement this approach in Python.
  • I use an up-to-date example throughout the whole paper to show the utilization of the various aspects of my approach.

The Problem

Software experts agree that testing is essential to software quality. As with software engineering in general, strategies vary on how to do it. Arguments about the right testing strategy usually circle around the following questions:

  • What level of application testing is appropriate? Should we focus on unit testing, functional testing, GUI testing, integration testing?
  • How do we determine when testing is finished? How do we decide whether the product can be shipped or released?
  • How often do we test? Every release, every month, every week, every day, or even after each change to the source code?
  • What tools do we use?
  • What methodologies should we follow?
  • What amount of tests should we automate?
  • Who performs the testing? Who owns the testing discipline or process?

Test automation does not solve or even address all of the questions above, but it should definitely be considered in your testing strategy. The biggest benefit of automated testing is that, once the tests are implemented, you can run them as often as your hardware allows at no additional cost. Automated tests are also reproducible, whereas manual tests in many cases are not. On the other hand, test automation is itself another expensive software development project, and an automated test suite must be maintained to stay effective, which is expensive as well. GUI level test automation, for example, is typically created with the capture and replay method; if the GUI changes substantially, the test cases need to be re-recorded, which is hard to sustain in a rapidly evolving environment. Analogous to refactoring in software development, the test suite should be refactored to encourage reuse and follow the DRY (Don't Repeat Yourself) principle.

The following diagram shows the information flows in a typical software development process for service oriented industries such as telecommunications.

Information Flow within the Software Development Process

The diagram gives an overview of the information flows between the business units within a company. The technical business units provide services such as software development, end-to-end testing, and systems operation. The business department understands the markets in which the company operates and defines product strategies. The people in the business department define the requirements for software development and are the domain experts. They work closely with analysts from the technical development unit who capture and analyse the requirements; experienced analysts can also be regarded as domain experts. Domain experts frequently do not state domain specific requirements because, by nature, these seem too trivial or too intuitive to spell out. An example of such implicit domain knowledge is which products are placed in which channels. Instead, they focus entirely on specifying requirements for the computer system.

The problem I want to point to is that testers may lack domain specific knowledge, which in turn impacts the test results. As a rule, testers do not get in touch with domain experts because they work in different business units within the company. The test analysts and testers base their test case documentation on analysis and development artefacts such as requirements, analyses, and specifications. If the requirements were stated in terms of the computer system rather than in terms of domain knowledge, these artefacts will not contain domain specific knowledge, and domain specific tests are simply not performed.

The consequence is that you must involve the domain experts in your test initiatives. Leaving cultural issues aside for a moment, this is still difficult because domain experts often lack application knowledge, test knowledge, and a technical background. To succeed in domain specific testing you must involve the domain experts and bridge the technological gap in order to capture the domain specific knowledge you need for your test cases.

Automated testing has a lot of potential and is very promising, but it currently falls short of delivering on it because it makes two questions harder to answer: how to keep the test suite maintainable, and how to get domain specific knowledge into the tests.

If properly done, an automated functional test suite can:

  • Reduce cost and time of regression testing.
  • Increase quality by freeing testers to focus on value added activities such as exploratory testing.
  • Increase quality by faster feedback to developers.
  • Capture and implement domain specific tests.
  • Make requirements explicit by giving developers executable acceptance tests.

Domain Specific Languages (DSL) to the rescue

A Domain Specific Language (DSL), as opposed to a General Purpose Language (GPL), is a programming language tailored specifically to an application domain: rather than being general purpose, it captures precisely the domain's semantics. Popular examples of DSLs are HTML (used for document markup) and VHDL (used to describe electronic circuits). DSLs allow the concise description of an application's logic, reducing the semantic distance between the problem and the program.

Testing tools and frameworks address a broad range of application domains (a broad focus). This is natural, because the tool vendor or developer wants to see the tool applied to as many applications as possible. For instance, the Selenium test automation tool can be used to test a web shop application, and the same tool can be used to test a social community web application. The tester or test automation developer sits at the other extreme: he has a very narrow functional focus on the complexity of his own application. He needs tooling that fits as well as possible in order to succeed with the initiative and gain efficiency, so he will probably start a new framework to address exactly his requirements. That is one of the reasons why so many frameworks are available.

Besides writing a new framework, there is another way to address the needs of the test automation developer: you can customize an existing framework so that it fits your needs. This can be done in two ways. The first and obvious way is to modify or extend the existing tool. Unfortunately this has its own problems, such as the source code not being available for proprietary tools; updates to new tool versions are also problematic, especially for customized proprietary solutions. The other alternative is to perform the customization within a layer on top of the existing tool. In this case you can completely shield away the complexity of the underlying testing tool; in the extreme, you can later exchange the underlying framework or tool without changing your test cases. In this variant the possibilities to extend functionality are limited, but the positive aspects of the layering approach outweigh the negative, therefore we will use this approach in this paper. The following diagram shows the different software layers within our testing approach.

Layers of the Test Automation Implementation
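
To make the layering idea concrete, here is a minimal sketch of such a layer, assuming Selenium RC (introduced later in this paper) as the underlying tool. The class name MapPage and its methods are illustrative only, not part of any existing framework. Test cases would talk exclusively to this layer, so the underlying tool could later be exchanged without touching them.

from selenium import selenium

class MapPage(object):
    """Thin layer over the testing tool; test cases use only this API.

    A minimal sketch assuming Selenium RC; the names are illustrative."""

    def __init__(self, url="http://www.openstreetmap.org/"):
        # All knowledge about the underlying tool is confined to this layer.
        self.sel = selenium("localhost", 4444, "*chrome", url)
        self.sel.start()

    def search(self, query):
        # Locators live in the layer, not in the test cases.
        self.sel.open("/")
        self.sel.type("query", query)
        self.sel.click("commit")

    def close(self):
        self.sel.stop()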

DSLs are by definition special purpose languages. In our context they provide:

  • Concrete expressions of domain knowledge (captured in human readable form, not buried in system source code).
  • Direct involvement of the domain experts. The DSL format can often be designed to match the format typically used by domain experts, which keeps the experts in a very tight software life cycle loop.

As explained in the last section, domain experts often lack a technical background, but it is necessary to involve them in order to get domain specific testing right. The DSL you and I are going to build focuses on the application domain and hides away the complexity of the testing framework. Since no testing framework knowledge is necessary, this brings test automation within reach of the application domain experts. With a little help they will soon be able to review and comment on test cases and draft outlines of new ones. The goal of a functional testing DSL is to be as human readable as possible and to abstract away the complexity of the underlying testing tools. The following example test case for OpenStreetMap should give you an intuitive idea of what we want to achieve.

browse http://www.openstreetmap.org

search Von-Gumppenberg-Strasse, Schmiechen

pan right

zoom in

Another test automation problem we want to address with the functional testing DSL is the expensive maintenance of test suites. The DSL commands enforce reuse and reduce code duplication, and the decomposition of test cases into commands makes it easier to identify where to apply changes and simplifies the changes themselves. The DSL enables maintenance of the test suite without re-recording test cases after each change, which significantly reduces maintenance costs and the cost of ownership of the automated test suite.
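
As a small illustration of why this reduces maintenance, consider a hypothetical command that keeps its GUI locator in one place (the names are illustrative; the locator string anticipates the Selenium example later in this paper). If the application's zoom control changes, only this one line is updated and no test case needs to be re-recorded:

# Hypothetical sketch: every test case shares this one command, so a
# GUI change is fixed here once instead of in each recorded test case.
ZOOM_IN_LOCATOR = "OpenLayers.Control.PanZoomBar_6_zoomin_innerImage"

def zoom_in(sel, amount=1):
    for i in range(amount):
        sel.click(ZOOM_IN_LOCATOR)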

DSL Modelling

The term DSL modelling is rarely used in the literature. Most scientific papers on DSLs use the terms DSL design and development, and most authors focus on the topic of implementing a DSL: they discuss the differences, pros, and cons of internal and external DSLs and describe implementation related patterns in great detail. Unfortunately, the practice oriented reader who wants to use a DSL to solve a concrete problem is left alone when it comes to DSL modelling. To address the tasks necessary to model a specific application domain I will use the term DSL Modelling, distinguishing them from the tasks necessary to implement a DSL, which I will address as DSL Implementation (the next section).

DSL modelling consists of the analysis of the problem domain, refinement and design, and the methods used to drive this process. A prerequisite for the design of a DSL is a detailed analysis and structuring of the application domain. Graphical feature diagrams [3], part of the FODA methodology, have been proposed to organize the dependencies between such features and to document variabilities. In DSL modelling it is important to capture the variabilities of your application domain, because variability is the key factor in identifying complexity in the domain. The following diagram shows such a feature diagram for the application domain of our example.

Feature Diagram for our Web Application Test example
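
In place of the graphic, a rough sketch of such a feature model for our example can be written down as plain data. The classification below into mandatory, optional, and alternative features is my own assumption, derived from the DSL commands used later in this paper:

# Rough sketch of the feature model for the web map test domain.
# Mandatory: navigation and search; optional: authentication (only
# the Edit tab requires a login); alternatives: pan directions, zoom.
features = {
    "navigation": {                           # mandatory
        "pan":  ["left", "right", "up", "down"],  # alternative
        "zoom": ["in", "out"],                    # alternative
    },
    "location_search": True,                  # mandatory
    "authentication": ["login", "logout"],    # optional
}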

The feature diagram helps us to document and drive the DSL modelling. Potential sources of features are:

  • Existing and potential stakeholders.
  • Domain experts and domain literature.
  • Existing systems and documentation.
  • Existing analysis and design models (Use-case models, Object-models, etc.).
  • Models created during development (Entity-Relationship-Model, Class-Diagram, etc.).

For DSL modelling in the area of test automation we should use the following additional sources to look for features and vocabulary of domain specific knowledge:

  • Product descriptions targeted to the customer.
  • Any other sources that describe the behaviour of the system in the terms used by the domain expert (change requests, trouble tickets, bug reports).
  • Descriptions of manual test cases of the area to be automated.

Strategies to identify features from the Czarnecki book [1]:

  • Look for important domain terminology that implies variability, for example, checking account vs. savings account.
  • Examine domain concepts for different sources of variability, for example, different stakeholders, client programs, settings, contexts, environments, aspects, and so on. In other words, investigate what different sets of requirements mean for different domain aspects.
  • Use feature starter sets to start the analysis. A feature starter set consists of a set of aspects for modelling. Some combinations of aspects are more appropriate for a given domain than for another. For example, authentication, security, transactions, logging, etc.
  • Look for features at any point in the development. As mentioned before, we have high-level system requirement features, architectural features, subsystem and component features, and implementation features. Thus, we have to maintain and update feature models during the entire development cycle. We may identify all kinds of features by investigating variability in use case, analysis, design and implementation models.
  • Identify more features than you initially intend to implement. This is a useful strategy which allows us to create room for growth.

Steps in feature modelling from the Czarnecki book [1]:

  1. Record similarities between instances, that is, common features, for example, all accounts have an account number.
  2. Record differences between instances, that is, variability, for example, some accounts are checking accounts and others are savings accounts.
  3. Organize features in feature diagrams. Organize them into hierarchies and classify them as mandatory, alternative, optional.
  4. Analyse feature combinations and interactions. We may find certain combinations to be invalid.
  5. Record all the additional information regarding features such as short semantic descriptions, rationales for each feature, stakeholders and client programs interested in each feature, examples of systems with a given feature. Document constraints, default dependency rules, availability sites, binding sites, binding modes, open/closed attributes, and priorities.

Start with steps 1 and 2 in the form of a brainstorming session by writing down as many features as you can. Then try to cluster them and organize them into feature hierarchies while identifying the kinds of variability involved (i.e. alternative, optional etc.). Finally, refine the feature diagrams by checking different combinations of the variable features, adding new features, and writing down additional constraints. Maintain and update the initial feature models during the rest of the development cycle. You may also start new diagrams at any point during the development.

The feature diagram is very helpful during domain analysis, but the DSL modelling for our OpenStreetMap example is not yet finished. In general it is unclear how to proceed once a feature diagram exists [3]. In software engineering there is no "formula" that can be applied generically to implement a piece of software from a design model; as elsewhere in software engineering, engineering skill is needed to transfer a feature diagram into a DSL. One thing that definitely helps to bridge this gap is deciding how to implement the DSL in the first place. How to implement a DSL in Python is covered in the next section.

DSL Implementation in Python

Two common variants exist for DSL implementation: a DSL can be implemented as an internal DSL or as an external DSL. The terms internal DSL and external DSL were coined by Martin Fowler [8]. An external DSL is an "independent" programming language, implemented like a general purpose programming language: some kind of parser tool or framework is used to interpret or compile the external DSL, based on a grammar, for the target platform.

On the other hand, a DSL can be implemented as an internal DSL, also known as the piggyback pattern. The internal DSL is written like a normal program in the target programming language, and the syntactic features of the target language are used to make the program more human readable: the programmer uses indentation and the naming of methods and variables to make the program read like sentences in a natural language. All the existing infrastructure of the target language, such as its parser, interpreter, or compiler, is reused, and it is also possible to extend or limit the features of the target language if necessary. The effort to create an internal DSL is usually smaller than that of creating an external DSL, and sometimes it also comes in handy to have the underlying power of the target language at hand. On the other hand, the syntax of the internal DSL is limited by the syntax of the target language.

Internal DSLs are often implemented by use of Method Chaining. With method chaining it is easy to implement a DSL even in system programming languages like C++ and Java. The following code fragment shows the implementation of a test case as internal DSL in Python.

from osm_dsl import Browser

Browser("http://www.openstreetmap.org/") /
    .click_view() /
    .search("Von-Gumppenberg-Strasse, Schmiechen") /
    .pan_right() /
    .zoom_in()

Compared to natural English this is strictly formatted and still carries a lot of syntactic noise (parentheses, quotes, and line continuations). Note that the verification steps for the automated test case will be built into the DSL commands and will not be visible in the test case description.

from selenium import selenium
import unittest, time, re

class Browser(unittest.TestCase):
    def __init__(self, website):
        self.setUp(website)

    def zoom_in(self, amount=1):
        for i in range(amount):
            self.selenium.click("OpenLayers.Control"
                ".PanZoomBar_6_zoomin_innerImage")
            time.sleep(1)
        return self

    def pan_right(self, amount=1):
        for i in range(amount):
            self.selenium.click("OpenLayers.Control"
                ".PanZoomBar_6_panright_innerImage")
            time.sleep(1)
        return self
    ...

The code sample above shows the implementation of the commands "zoom_in" and "pan_right" for the internal DSL. To follow the method chaining pattern, all methods in the example return a reference to the class instance (self), so that the next command can be chained onto the result.

The implementation of the functional testing DSL as an internal DSL is a valid option, but in my opinion the resulting language is still nothing like natural language, and therefore not suitable for interaction with the domain experts. Alternatively, implementation as an external DSL means additional effort for modelling and maintaining the grammar of the external DSL. It is clear that a DSL for functional test automation will change a couple of times in the beginning in order to adapt to the application domain and become as human readable as possible, and every such change to the DSL grammar would mean additional overhead for an external DSL. I could not find a way to improve the readability of the internal DSL implementation in Python to an acceptable level. While looking for one, however, I had an idea for implementing an external DSL in such a way that the benefits of the internal DSL are kept while all the syntactic possibilities of an external DSL are available. In fact, the implementations of commands for the external DSL and the internal DSL are almost identical in this approach, so there is no additional effort for implementing the commands. Best of all, during the start phase, when you work with three or four commands, these can be modified without changing the grammar of the external DSL.

When automating test cases I always look at the manual test cases first. Manual test cases should be described in such a way that a relatively inexperienced tester who has read the application user guide can execute the test case and verify the results. DSLs for test automation usually start small, with about three to five implemented commands. These commands are used to formulate a couple of test cases, and I have to adjust them a couple of times during this process until they fit my needs. The language grows step by step with the project. During maintenance of the test suite it will often be necessary to adjust the implementation of the commands due to changes in the GUI of the system under test.

The following sample shows a test case formulated in the external DSL. The test case is formulated in natural language which allows us to involve domain experts in the test automation project. The test case describes two different scenarios for location search. The first scenario is the search in the View tab which does not require the user to log in. The second scenario tests the location search in the Edit tab which requires a login.

Story Search Location uses osm_dsl

Scenario Search Location in View tab:
    browse http://www.openstreetmap.org
    search Von-Gumppenberg-Strasse, Schmiechen
    pan right
    zoom in

Scenario Search Location in Edit tab:
    browse http://www.openstreetmap.org
    click Edit
    login user mark identified by test1234
    search Von-Gumppenberg-Strasse, Schmiechen
    pan right
    zoom in
    logout

Next we take a look at the Selenium tools. For recording the raw tests I use the Selenium IDE, a Firefox plugin used to record clicks on a web page and to add wait conditions and verification steps. We export a captured test case as a Python script, which we can later use as a basis for the implementation of the DSL commands. We use another tool, the Selenium Server, for the execution of the test cases: it provides the test profile for the web browser, starts and stops the browser, and handles all the communication between our test suite and the browser. I cover the usage of Selenium in more detail in the article "Functional testing of web applications with Selenium" on my website [5].

The following table shows a Selenium test case with all the technical details like wait conditions and verification steps.

Selenium test case with all wait conditions

In Selenium IDE it is possible to export the recorded test case into a Python script which looks like the following:

from selenium import selenium
import unittest, time, re

class NewOsmTest(unittest.TestCase):
    def setUp(self):
        self.verificationErrors = []
        self.selenium = selenium("localhost", 4444, "*chrome",
            "http://www.openstreetmap.org/")
        self.selenium.start()

    def test_new_osm_test_case(self):
        sel = self.selenium
        sel.open("/")
        sel.type("query", "Von-Gumppenberg-Strasse, Schmiechen")
        sel.click("commit")
        for i in range(60):
            try:
                if sel.is_element_present("permalinkanchor"): break
            except: pass
            time.sleep(1)
        else: self.fail("time out")
        sel.click("OpenLayers.Control.PanZoomBar_6_panright_innerImage")
        sel.click("OpenLayers.Control.PanZoomBar_6_zoomin_innerImage")
        try: self.failUnless(sel.is_element_present("loginanchor"))
        except AssertionError, e: self.verificationErrors.append(str(e))
        sel.click("loginanchor")
        for i in range(60):
            try:
                if sel.is_element_present("//form[@action='/login']"):
                    break
            except: pass
            time.sleep(1)
        else: self.fail("time out")
        try: self.failUnless(sel.is_element_present("loginanchor"))
        except AssertionError, e: self.verificationErrors.append(str(e))
        try: self.failUnless(sel.is_element_present("user_email"))
        except AssertionError, e: self.verificationErrors.append(str(e))
        try: self.failUnless(sel.is_element_present("user_password"))
        except AssertionError, e: self.verificationErrors.append(str(e))
        sel.type("user_email", "markfink")
        sel.type("user_password", "test1234")
        try: self.failUnless(
            sel.is_element_present("//form[@action='/login']"))
        except AssertionError, e: self.verificationErrors.append(str(e))
        sel.click("commit")
        for i in range(60):
            try:
                if sel.is_element_present("link=markfink"): break
            except: pass
            time.sleep(1)
        else: self.fail("time out")
        try: self.failUnless(sel.is_element_present("link=markfink"))
        except AssertionError, e: self.verificationErrors.append(str(e))
        sel.click("editanchor")
        sel.click("logoutanchor")

    def tearDown(self):
        self.selenium.stop()
        self.assertEqual([], self.verificationErrors)

if __name__ == "__main__":
    unittest.main()

It is relatively easy to identify which parts of the script implement which aspects (search, zoom, login, ...) of the test case, although it takes a little while to get used to reading captured test cases. After a while you will slice the scripts into reusable commands in no time. Sometimes test cases that worked before, or that worked in another browser, will fail; this happens due to the asynchronous behaviour of AJAX implementations. If it happens, you simply add another wait condition. It is better to add a waitForElement condition than a sleep() for a fixed time interval: because some browsers execute Javascript faster than others, you must use waitFor conditions for synchronization to happen properly. After you slice the script into parts and form them into DSL commands, take some time to add additional verification steps. The DSL commands will be reused, and you do not need to rework the verification steps of a command later if it was done correctly the first time.
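
One way to avoid repeating the sixty-iteration polling loop from the exported script in every command is to factor it into a small helper. The following is a sketch of my own, not part of Selenium or of the exported code:

import time

def wait_for_element(sel, locator, timeout=60):
    # Poll until the element appears instead of sleeping for a fixed
    # interval; raise on timeout, like the exported script's fail().
    for i in range(timeout):
        try:
            if sel.is_element_present(locator):
                return
        except Exception:
            pass
        time.sleep(1)
    raise AssertionError("time out waiting for %s" % locator)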

After some tweaking, the implementation of your external DSL commands will look something like this:

from selenium import selenium
import time

# Note: sel, fail, failUnless, and verificationErrors are provided by
# the execution environment of the DSL commands (the nose plugin
# described below).

@dsl('search (.+)')  # (.+) so the query may contain spaces and commas
def search(query):
    sel.open("/")
    sel.type("query", query)
    sel.click("commit")
    for i in range(60):
        try:
            if sel.is_element_present("permalinkanchor"): break
        except: pass
        time.sleep(1)
    else: fail("time out")

@dsl('zoom in (\w*)')
def zoom_in(amount=1):
    # the captured amount arrives as a string; empty means 1
    for i in range(int(amount or 1)):
        sel.click("OpenLayers.Control.PanZoomBar_6_zoomin_innerImage")

@dsl('pan right (\w*)')
def pan_right(amount=1):
    for i in range(int(amount or 1)):
        sel.click("OpenLayers.Control.PanZoomBar_6_panright_innerImage")

@dsl('login user (\w+) identified by (\w+)')
def login(user_email, user_password):
    try: failUnless(sel.is_element_present("loginanchor"))
    except AssertionError, e: verificationErrors.append(str(e))
    sel.click("loginanchor")
    for i in range(60):
        try:
            if sel.is_element_present("//form[@action='/login']"): break
        except: pass
        time.sleep(1)
    else: fail("time out")
    try: failUnless(sel.is_element_present("loginanchor"))
    except AssertionError, e: verificationErrors.append(str(e))
    try: failUnless(sel.is_element_present("user_email"))
    except AssertionError, e: verificationErrors.append(str(e))
    try: failUnless(sel.is_element_present("user_password"))
    except AssertionError, e: verificationErrors.append(str(e))
    sel.type("user_email", user_email)
    sel.type("user_password", user_password)
    try: failUnless(sel.is_element_present("//form[@action='/login']"))
    except AssertionError, e: verificationErrors.append(str(e))
    sel.click("commit")
    for i in range(60):
        try:
            if sel.is_element_present("link="+user_email): break
        except: pass
        time.sleep(1)
    else: fail("time out")
    try: failUnless(sel.is_element_present("link="+user_email))
    except AssertionError, e: verificationErrors.append(str(e))
...

Note that the functions in the source code block above, which form the DSL commands, are modified by the @dsl decorator. The @dsl decorator links a regular expression to the function; this regular expression is used during the execution of a test case scenario to identify which DSL command to execute. The grammar and other infrastructure of the functional testing DSL are implemented as a Python nose unit test plugin (open source) [6].
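
To illustrate the mechanism, the following is a minimal sketch of how such a decorator and its dispatcher could be built. It is not the actual tdsl implementation [6], only an illustration of the idea:

import re

_commands = []  # registry of (compiled pattern, function) pairs

def dsl(pattern):
    # Register the decorated function under its regular expression.
    def register(func):
        _commands.append((re.compile(pattern + '$'), func))
        return func
    return register

def execute(line):
    # Find the first command whose pattern matches the DSL line and
    # call it with the captured groups as arguments.
    for pattern, func in _commands:
        match = pattern.match(line.strip())
        if match:
            return func(*match.groups())
    raise ValueError("no DSL command matches: %r" % line)

@dsl(r'pan (\w+)')
def pan(direction):
    print("panning %s" % direction)

execute("pan right")   # prints: panning right

The real plugin must additionally handle the Story and Scenario structure shown earlier; the sketch only shows the command dispatch.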

I think this is really worth the effort. With a little additional coding it is possible to break up the capture and replay tests into reusable DSL commands that can easily be combined into new test cases. The work spent fixing timing issues and writing additional verification steps pays off when you formulate new test cases based on these commands.

We have now finished the implementation of our OpenStreetMap test case as an external DSL. The test case in this form is much more readable than the original Selenium test case (see the test case in tabular form above). I would go so far as to say that domain experts, after a minimal amount of training, will be able to review test cases in this format. After a little practice, domain experts will probably be able to sketch new test cases that can then be automated incredibly quickly. Of course this depends heavily on the working environment; I would guess that in a start-up environment domain experts would be motivated to support writing test cases in this way because of the high quality of the resulting software product.

Conclusions and further work

The main drawback of this approach is that special knowledge is necessary to model the DSL. It requires you to analyse your application domain; a feature diagram can help you with this. Read the documentation on incidents, tickets, bug reports, or whatever they are called in your organization in order to learn the language of your business. Talk to business analysts and read the descriptions of manual test cases, if they exist, in the area of your application domain. Start small and iterate: select just enough DSL commands to implement one test case, implement another test case with the same commands, and add further commands as you go along. Do not hesitate to change the language to make it as human readable as possible, and collect early feedback from your domain experts. Avoid capture and replay of test cases as much as possible: such tests are not easy to maintain, and collecting too many of them will render your test suite useless. Instead, break the captured tests up into reusable DSL commands.

If you know of an open source Python application that needs functional testing, please let me know. I would be interested in running functional test automation in a community project.


References

[1] Krzysztof Czarnecki, Ulrich W. Eisenecker, 2000. Generative Programming - Methods, Tools, and Applications. Addison-Wesley.

[2] Z. R. Dai, 2006. An Approach to Model-Driven Testing - Functional and Real-Time Testing with UML 2.0, U2TP and TTCN-3. Fraunhofer Publications.

[3] Arie van Deursen, Paul Klint, 2001. Domain-Specific Language Design Requires Feature Descriptions. Journal of Computing and Information Technology.

[4] Eric Evans, 2004. Domain-Driven Design - Tackling Complexity in the Heart of Software. Addison-Wesley.

[5] Mark Fink, 2008. Functional testing of web applications with Selenium. http://www.testing-software.org/cast/Webapp-Testing/Selenium.html.

[6] Mark Fink, 2009. tdsl - A Functional Testing DSL. http://bitbucket.org/markfink/tdsl/.

[7] Fitnesse acceptance testing framework. http://fitnesse.org/.

[8] Martin Fowler, 2008. Method Chaining. http://www.martinfowler.com/dslwip/.

[9] Dan North, 2006. Introducing BDD. http://dannorth.net/introducing-bdd/.
