Test management in software testing
Genuine test management is still not standard in software testing. Studies show that almost half of all software errors are only found in production. This is not only costly, but also has a negative effect on customer loyalty and possible follow-up orders.
The reason for poor software quality can often be traced back to inadequate test management, software testing know-how and QA budget. Many companies have still not managed to establish a continuous test process. This leads to planned tests being cut or cancelled at the end of the project due to deadline pressure and the fear of possible contractual penalties. That this approach is anything but beneficial for software quality needs no further explanation.
Errors that are only found after go-live cost five times as much as errors that are discovered during development. With poor or non-existent test management, savings are made in exactly the wrong place.
Problems with insufficient software testing
The lack of methodical test management in companies means that software tests are implemented unsystematically. For example, inexpensive unit tests are often not used, or used only sparingly, and are no longer continuously maintained due to time pressure in the course of the project. The same applies to test automation, which is neither continuously planned nor consistently implemented.
As a result, the basic functionality of the software is checked using only a few use cases at the end of the project. If errors occur during this check, they are often not recorded in a standardized way and are passed on to the development team incompletely. Error analysis and correction therefore take more time than is actually necessary. Due to the lack of time, the necessary regression tests are then dispensed with, which in turn increases the likelihood of overlooking new errors in older, previously tested modules.
All in all, the spontaneous organization of individual test measures, which take place separately from one another, consumes a lot of time, which, however, does little to improve the software quality. Well thought-out test management can help here.
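To make "inexpensive unit tests" concrete: a single, isolated check of one function, written once and run automatically on every build. The function and its discount rules below are purely illustrative assumptions, not taken from any real project.

```python
# A minimal, inexpensive unit test: one function checked in isolation.
# The discount rules are an assumed example.

def discount(price: float, customer_years: int) -> float:
    """Return the discounted price: 5% off after 2 years, 10% after 5."""
    if customer_years >= 5:
        return round(price * 0.90, 2)
    if customer_years >= 2:
        return round(price * 0.95, 2)
    return price

def test_discount_boundaries():
    # Boundary values around each threshold catch off-by-one errors cheaply.
    assert discount(100.0, 1) == 100.0
    assert discount(100.0, 2) == 95.0
    assert discount(100.0, 4) == 95.0
    assert discount(100.0, 5) == 90.0

test_discount_boundaries()
```

Such a test costs minutes to write; a regression in the same rule found in production costs far more, which is exactly the point made above.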
What is test management?
Test management means the coordination of all activities in the entire test process. The test process consists of many individual activities that can be grouped as follows (according to ISTQB®):
- Test planning and test control
- Test analysis and test design
- Test implementation and execution
- Evaluation of exit criteria and reporting
- Completion of test activities
The test manager accompanies these tasks from the beginning to the end of the project. In conventional (non-agile) projects, the test manager is usually also operationally responsible for "test planning and test control" as well as "evaluation of exit criteria and reporting".
What does test management include in detail?
Regardless of whether software development is agile or conventional, basic test-relevant things have to be considered. The following points must be defined or planned in test management; the test manager closely involves other project staff in the course of working them out:
- Requirements for a test management tool; selection of a test management / ALM software (test case management, requirements, etc.), or the decision whether a tool should be used at all.
- Test case management (where and how? If necessary, with a link to requirements?)
- Defect management (where and how?)
- Definitions that are needed in the context of software development and software testing, where everyone involved must speak a common language:
- Defect classes, definition and evaluation of defect severity
- Terms used in everyday project work by the testers; these differ depending on the company and project.
- Test exit criteria / acceptance criteria
- Tip: In addition to the functional criteria, also think of the non-functional criteria (performance, maintainability, reliability, usability).
- Test scope / test coverage, especially the extent of the regression tests
- Tip: At least a rough plan has to be made of what is to be tested and in what roughly defined scope. Defining test coverage is very difficult, especially in large corporate projects.
- Tip: Don't forget that too much can be tested. If too much time is "lost" in testing without finding important bugs, the scope of the test is too extensive.
- Test content / test definition, scope of the test objects and quality criteria
- Tip: Don't just consider functional tests: technical (non-functional) tests, logging behavior, load and performance tests, resource consumption, unit tests (component tests), failure tests, tests at interface or GUI level, use of crowd testing, usability tests; the latter are more and more popular in the mobile app area.
- The test manager assists with test analysis (what is to be tested?) and test design (how is it to be tested?)
- Priorities in testing, risk assessment; select strictly risk-oriented test procedures if necessary
- Tip: Testing is (or should always be) risk-based in some way, because in the end it is important that business-critical things run smoothly. These must be well tested.
- Test types and delimitation of the test levels (also black-box tests vs. white-box tests)
- Tip: Above all, it is also important to decide which tests should run manually and which automatically.
- Test environment
- Tip: When is the environment available to whom? Who builds the environment and maintains it? Which and how many test environments are needed? What does a test environment include? Which interfaces, peripheral systems, mocks and which accesses are required?
- Test data
- Tip: Particularly technical (non-functional) tests such as load and performance tests often require a large amount of test data. You may have to think about production extracts, mass test data generation, or anonymization. All of this is part of test data management, which belongs to test management but can be a role of its own.
- Resources, test personnel issues, test team composition
- All personnel questions must be considered: when, who, where, what? This also includes who can do what, i.e. who should take on which tester role and which test tasks.
- Test control, monitoring of test activities and tracking of trends
- Tip: Continuously check whether everything is going well or whether problems need to be escalated. Check whether everything is going according to plan. How do the defects / issues (errors / deviations) develop from week to week or from milestone to milestone? How far is it to the next milestone? Will we make it?
- Reporting, reports:
- Which reports should be sent when and to whom?
- What escalation levels and escalation paths are there?
- Tip: Reports and test results should be traceable and transparent.
- Test planning:
- Tip: start testing early (see also the paragraph "Testing in the software lifecycle"). For test planning, suitable metrics are to be selected with which one can monitor the progress of the test and make a statement as to whether one is “on schedule”.
- Defect processes, Defect management or defect management processes
- Tip: The workflow for the defect management process should be as easy to understand as possible and tool-supported, so that no one who processes defects in their role (regardless of whether tester or developer) needs instructions or a handout for the defect management process. It should be obvious and self-explanatory.
- Tip: The control / monitoring of the defect management process and the status of the defects is also the responsibility of the test manager, insofar as the role has not been outsourced to other people.
- Effort estimates / test estimates
- Tip: The test manager does not normally make these estimates on his own, but involves the testers and test automation specialists. Before doing this, of course, the scope of the work must first be fundamentally clarified.
- And of course active, continuous improvement / test improvement
- Tip: Among other things through methodical retrospective meetings with the testers, initiated by the test manager.
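Several of the definition points above (defect classes, severity, exit criteria) can be made tangible in a few lines. The following sketch shows one possible severity model and one possible exit criterion; both are assumptions for illustration, and every project defines its own.

```python
# Sketch of common defect management definitions.
# Severity classes and the release rule are assumed examples.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    BLOCKER = 1   # business-critical function fails, no workaround
    MAJOR = 2     # important function fails, workaround exists
    MINOR = 3     # cosmetic or low-impact deviation

@dataclass
class Defect:
    id: str
    title: str
    severity: Severity
    status: str = "open"   # open -> in_progress -> resolved -> closed

def must_fix_before_release(defects):
    """An assumed exit criterion: no unresolved blockers or majors."""
    return [d for d in defects
            if d.status != "closed"
            and d.severity in (Severity.BLOCKER, Severity.MAJOR)]
```

Writing the classes down like this forces the common language the text asks for: everyone agrees on what "blocker" means before the first defect is filed.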
The test concept
Depending on how the company and the IT project are set up, these points (see "What does test management include?") can then be put into a test manual, test strategy, test concept, etc., depending on which development process is followed. Not everything necessarily has to be consolidated in elaborate documentation, especially not in agile projects, but most of the points are also considered in agile testing.
The more “lean” or “agile” a software development project is, the smaller the documentation will be. Agile credo for documentation: “Just enough!”. However, thinking about test management topics is never out of place.
According to ISTQB, the test concept is ...
"A document that describes, among other things, the scope, the procedure, the resources and the timing of the intended tests with all activities. (…)"
Testing in the software lifecycle (application lifecycle)
Just about every textbook and curriculum that covers software testing has a passage that says: please start software test activities as early as possible!
Yet very few do it. This is changing with agile methods, or at least shifting, since there it is part of the tester's job to examine the stories during the planning phase. At least that is what current practices in the agile world suggest.
Regardless of whether you are agile or not, start all testing activities early enough. It is best to let testers participate in the project planning, software conception and especially the definition of requirements (or definition of stories), for example in the form of reviews.
Keyword: Testing over the entire application lifecycle!
The test process must not be viewed separately from the development process. Rather, the test process is closely interwoven with the development process and ideally runs at the same time as the development process. Synergy effects can only be used profitably if the test process accompanies the development process, which in turn leads to an increase in the quality of the product. The test process only ends with the successful acceptance of the software by the client.
Test managers and testers should therefore be involved right from the start in order to arrive at a high-quality and error-free software product. Testers and test managers help to uncover errors in concepts and to discuss questions about testability right from the start. Furthermore, testers and test managers can begin their test analysis and test design early, or work on the test concept.
Tool support throughout the software lifecycle (application lifecycle)
In order to enable efficient work, not only the development team has to work with tools defined in advance and, in the best case, established in the company, but also the test team. In recent years, so-called ALM tools (Application Lifecycle Management Tools) have increasingly been used for test planning and administration.
In these tools, all software requirements can usually be mapped easily and transparently. The testers then have the opportunity to derive their test cases from this test basis. The test cases are in turn recorded in the ALM tool. Consistent linking of the test cases with the requirements ensures that no requirement is forgotten.
The test manager can then combine the individual test cases into different test runs and assign them to the respective testers, who then execute them. If errors occur during the execution of the individual test runs, these are recorded and prioritized in the tool in a standardized manner so that the development team can then take care of correcting the errors. After the errors have been corrected, the test manager can plan new test runs and assign them to his team members.
The ALM tools usually offer automatic evaluation of test runs as well as standardized test progress reports and test coverage metrics. This means that the entire team is always informed about the current status of the project and can counteract any problems or delays at an early stage.
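The aggregation an ALM tool performs over a test run can be sketched in a few lines. Tool internals vary widely; the data structure and field names below are illustrative assumptions, not the schema of any real product.

```python
# Sketch of an ALM-style test run: cases assigned to testers, results
# recorded, defects linked, and a standardized status report derived.
from collections import Counter

test_run = {
    "name": "Release 1.2 regression",
    "results": [
        {"case": "TC-01", "tester": "alice", "status": "passed"},
        {"case": "TC-02", "tester": "alice", "status": "failed", "defect": "D-101"},
        {"case": "TC-03", "tester": "bob",   "status": "not_run"},
    ],
}

def run_report(run):
    """Derive the kind of summary an ALM tool shows automatically."""
    counts = Counter(r["status"] for r in run["results"])
    total = len(run["results"])
    return {
        "run": run["name"],
        "executed": total - counts["not_run"],
        "passed": counts["passed"],
        "failed": counts["failed"],
        "pass_rate": round(counts["passed"] / total * 100, 1),
        "open_defects": [r["defect"] for r in run["results"] if "defect" in r],
    }

print(run_report(test_run))
```

The value of the tool is that this report is always current and identical in shape for every run, so the whole team reads the same numbers.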
Many companies use different tools for these tasks. Most of the better-known tools on the market can be connected to one another via interfaces, which means that you can assemble your ALM tool landscape from many individual tools.
Test automation is also supported by a test automation tool; more on this later in the article.
If the tool chain consists of various applications, special attention should be paid to networking the complete tool landscape: test automation, test case management (possibly also requirements), reporting / test status and defects / deviations should be connected with each other, display information from the other tools at all useful points, and exchange and synchronize it via interfaces.
Standardization and guidelines in software testing
If you want to establish a test process that cannot only be used for one project, company-wide standardization must be used. For this purpose, internal company test manuals / test guidelines / test strategies (according to ISTQB: test guidelines) can be created, which serve as a template if specific test strategies and specific test concepts are designed in individual IT projects. These types of template test manuals are often also part of a higher-level software engineering manual (software development guideline) of the company.
The minimum standardization in software testing would be, for example, templates for defects, test cases and reports, which give a team a common denominator. Often, however, full company-wide standards are developed.
Furthermore, one can refer to standards in software testing and software test documentation as found in ISTQB, IEEE and ISO/IEC.
A relatively new standard in software testing is ISO/IEC/IEEE 29119 Software Testing.
It comprises:
- 29119-1: Concepts & Definitions
- 29119-2: Test Processes
- 29119-3: Test Documentation
- 29119-4: Test Techniques
- 29119-5: Keyword-Driven Testing
The uniform approach and the use of best practices reduce the time required on the one hand and improve teamwork on the other. Efficient work can only take place if it is clear to everyone what to do and how to do it. Furthermore, people can change teams more quickly, with company-wide standards even beyond IT projects.
Instead of fine-grained details, the test guideline can also contain at least the use of specific procedures or standards (e.g. ISTQB) or tools (e.g. specific test automation tools or ALM tools).
Another important point is the use of a uniform language throughout the team. For example, a mandatory ISTQB certification for all team members can help get the team on track and avoid unnecessary discussions about terminology.
Another advantage of company-wide standardization is the easier evaluation of key figures and the possibility of automatic generation of necessary reports.
A certain framework is of course required for standardization in test management or in the test process. With care and good justification, however, one should always be able to argue against a company-wide test standard and prefer an individual solution.
Outsourcing of test processes
Outsourcing parts of the test process or the entire test process may or may not make sense depending on the company and project. In practice, sub-areas such as test automation are often outsourced to external QA consulting firms or external IT service providers, as they (hopefully ☺) already have a lot of know-how in this area or may even be more cost-effective than an internal test department.
Before deciding to use outsourcing, however, an honest cost-benefit analysis and a realistic schedule should always be drawn up. It is not uncommon for these projects, if they have been poorly planned, not only to get out of hand in terms of costs. In particular, the additional communication effort and the checking of the final results are often underestimated in the initial planning of an outsourcing project.
Depending on the outsourcing model, it can involve test automation, manual tests, certain test types (e.g. load tests) or even the major part of the test process (integration tests, system tests, acceptance tests). From near-shore to off-shore, the best choice depends on the project, situation and company.
At the start of an outsourcing project, it must not only be determined which model (nearshoring or offshoring) is to be chosen and what results are expected, but also who is responsible for communication with the third-party provider and what measures can be taken in the event of problems.
Test planning and test control
Without conscientious and thorough test planning, especially large software projects can be doomed to failure right from the start. In order to guarantee a high software quality and a timely completion of the software, one therefore needs reliable test planning.
The test planning does not only consist of a simple resource planning, but also includes the creation of a test concept in which all essential test objects, the test entry and test exit criteria, the test termination conditions and also the test methods used and the test procedure are recorded. In addition, the necessary results and documents must be defined at the beginning of the project, which must be delivered at the end of the project.
The test control also includes the monitoring of test activities and rapid intervention in the event of deficiencies.
Test analysis and test design
In this phase the details of the individual test objects are considered. Only here can you see how complex the project really is and which test techniques and tools will be used. A precise analysis of the requirements makes it clear, for example, whether performance tests are absolutely necessary in the first phase of the project, or how many tests can be automated.
The intention of this phase is to translate all test objectives into concrete test conditions and test cases so that a finished test design specification is available at the end of this phase.
Test levels and test techniques
In general, there are different test techniques available to the test team for different test levels. The test levels can usually be divided into the following sub-disciplines:
- Unit tests (module tests, component tests)
- Integration tests, interface tests
- System tests
- Acceptance test
Different test techniques such as black box tests or white box tests can then be used in these test levels, although unit tests do not always fall within the responsibility of the test team.
The test manager usually determines to what extent and which technology is used. In the case of points such as unit tests, consultation with development management or project management is required.
In software projects in aerospace engineering or in the medical field, for example, legal requirements must be observed that make the use of pure black box tests impossible.
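The difference between the two technique families can be illustrated on a single assumed function: black-box cases are derived from the specification, white-box cases from the code structure.

```python
# Assumed example function: shipping costs 4.99 up to 2 kg,
# plus 1.50 per additional kg above that.

def shipping_cost(weight_kg: float) -> float:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 2:
        return 4.99
    return 4.99 + (weight_kg - 2) * 1.50

# Black-box: boundary values taken purely from the spec text
# ("up to 2 kg costs 4.99"), without looking at the code.
assert shipping_cost(2.0) == 4.99
assert shipping_cost(2.5) == 4.99 + 0.75

# White-box: reading the code reveals an error branch that the spec
# text alone might not have suggested, so we cover it explicitly.
try:
    shipping_cost(0)
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

In regulated domains the white-box view is not optional: structural coverage of exactly such branches is what the legal requirements demand evidence for.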
In the course of the test analysis, it must also be decided which tests are suitable for test automation and which tests should be carried out manually. Depending on the project and time, the best possible cost-benefit ratio must be created. In particular, routine activities and modules that already have a certain stability can be automated very easily, provided that the test team has enough know-how and the team has enough time.
A common mistake here is unrealistic management guidelines. It makes little sense to define in advance that, for example, 70% to 80% of the tests have to run automatically if the testers only get 20% to 30% of the developer time in return. In these cases it is the task of the test manager to give the project management a realistic assessment of how many automated tests are possible under the given conditions and which modules, for example, will only be automated after the first release.
A list of test automation software can be found here and a detailed explanation of test automation can be found here.
Test data and test environment
To be able to carry out successful, practice-relevant tests, such as those required for load and performance tests, the testers need, as early as possible, a test environment that corresponds to the later production environment. In addition, the QA engineers must be able to access a consistent and realistic database. It is of little help, for example, to simulate a data warehouse that works with millions of data records in real operation with only a few hundred test data records during the test. Nor does it make sense to check applications that will later be used by hundreds of users at the same time in a performance test with only 2 or 3 users.
Here, the sizing of the test environments plays a decisive role and this is exactly where the connection points for the subsequent configuration management and release management are. It is the test manager's task to coordinate these overlaps and to provide his test team with the optimal environment (depending on the purpose). This also includes all necessary peripheral systems and any mocks.
In addition to the test environment, the appropriate test data also play an important role. Depending on the intended use, the appropriate test data must be available. Practice-comparable load tests or fail-over tests under load conditions usually require a lot of test data or even a full production extract.
There are many different ways to obtain the appropriate test data. The key words are anonymization, pseudonymization, synthetic test data, test data generators, test data management tools, data protection and security.
The planning, generation and coordination of the provision of the test data is part of test data management, which often does not have a role (or person) of its own in IT projects, but is carried out by the testers and developers and planned by the test manager. Especially in very large companies and very large IT projects, fixed roles and fixed persons who deal centrally with the topic of test data can be an advantage, especially if data protection requires a certain amount of specialization and expertise.
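Two of the test data strategies mentioned above, synthetic mass generation and anonymization/pseudonymization, can be sketched as follows. Field names and rules are assumptions for illustration; a real anonymization concept must of course be agreed with data protection.

```python
# Sketch of two test data strategies: synthetic generation and
# pseudonymization of production-like records (assumed fields).
import hashlib
import random

def generate_customers(n, seed=42):
    """Synthetic mass test data: deterministic via seed, so runs are repeatable."""
    rng = random.Random(seed)
    return [{"id": i,
             "age": rng.randint(18, 90),
             "balance": round(rng.uniform(0, 10_000), 2)}
            for i in range(n)]

def anonymize(record):
    """Pseudonymize personal fields, keep analytically relevant ones."""
    out = dict(record)
    # Replace the name with a stable pseudonym (hash prefix).
    out["name"] = hashlib.sha256(record["name"].encode()).hexdigest()[:12]
    # Drop fields the test does not need at all.
    del out["email"]
    return out

customers = generate_customers(1000)
anon = anonymize({"name": "Erika Muster", "email": "e@example.com", "age": 44})
```

Determinism (the fixed seed) matters: a load test that fails must be reproducible with exactly the same data.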
Test implementation and execution
If all test entry criteria have been met, the test implementation phase can be started. In this phase, all test cases and use cases are executed and documented on the basis of the test design specification.
The test manager monitors the progress of the test and, if serious problems occur, makes the decision to abort the test. In addition, he ensures that all test results are clearly documented in accordance with the previously defined standards and that all errors found appear in the ALM tool or error management tool used.
Test evaluation and report
After successful completion of the test, a final test report is created. This report contains a summary of the tests and test activities as well as a list of all deviations found. Each test object is assessed individually in this report and a recommendation is made regarding the approval of the respective test object. A test coverage matrix is often provided in addition to the report itself. This matrix shows which requirements were covered by which test cases.
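The coverage matrix mentioned above can be sketched as a simple mapping from requirements to test cases; the IDs are assumed examples. Requirements without any linked test case are exactly what the final report must flag.

```python
# Sketch of a test coverage matrix: which requirements are covered by
# which test cases, and which are not covered at all (assumed IDs).
requirements = ["REQ-1", "REQ-2", "REQ-3"]
test_cases = {
    "TC-01": ["REQ-1"],
    "TC-02": ["REQ-1", "REQ-2"],
}

def coverage_matrix(reqs, cases):
    """Return requirement -> covering test cases, plus uncovered requirements."""
    matrix = {r: [tc for tc, covered in cases.items() if r in covered]
              for r in reqs}
    uncovered = [r for r, tcs in matrix.items() if not tcs]
    return matrix, uncovered

matrix, uncovered = coverage_matrix(requirements, test_cases)
print(uncovered)  # uncovered requirements belong in the final report
```

This is the same requirement-to-test-case link the ALM tool maintains; the matrix is just its tabular projection for the acceptance decision.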
Completion of testing activities
After the software has been released, the test activities can be completed. This final work includes, for example, the archiving of the test environment and test data, as well as the creation of final reports and documents. Finally, there is an evaluation session in which best practices for the next projects should also be worked out.
Test manager tasks
One activity of test managers is to create a test strategy and test concept, which in the best case is bindingly signed off by all stakeholders. The test manager is also responsible for "test planning and test control" as well as "reporting and the flow of information", and at best takes part in all relevant project meetings. In these, he presents the test progress of his team as well as the previously defined reports and metrics.
The test manager is the most important link between the testers and all stakeholders who are interested in the test and the test results. In meetings, the test manager represents the point of view of high software quality. The test manager should always strive for a good relationship with all stakeholders and must understand the stakeholders and their test requirements as well as their criticism of the test procedure. Of course, the test manager also communicates the test status (incl. defect status) and answers stakeholder questions about it if necessary. The test manager must always maintain a professional and objective impression.
The test manager must understand the tests in his test team and be able to provide information about the purpose, test coverage and other queries that arise from stakeholders. For this reason, he must have a rough overview of the overall situation of the test team, but also detailed knowledge of the tests. Since the test manager has at best provided at least conceptual and advisory support in test analysis and test design, and should have planned the entire test process, this detailed knowledge is usually available.
In addition to these functions, the test manager takes care of effort and resource planning and designs the test procedure as well as the implementation of the tests. He defines which test environment is used and which testing tools, templates and standards are applied.
The test manager leads the test team. He also tries to create a good and productive atmosphere in the team and is the first point of contact for team members if problems or questions arise. This places high demands on social skills, conflict resolution and leadership. Usually the test manager also builds up the test team.
In the best case, the test manager serves as a role model for his test team and is in a position to coach his testers in test techniques and, for example, test case design procedures.
Finally, the test manager designs, coordinates or (operationally) processes all points listed under "What does test management include?"
Are test managers always necessary?
Test managers are usually not deployed until the test team has reached a certain size. In practice, a team size of 3 to 7 testers has established itself. Test managers not only lead their team during the project, but also act as a central interface between project management, the development team and the test team. If there is no test manager, the test team implicitly takes on the test management tasks, since most of the issues from this must be coordinated and planned.
The common problems in test management
"Unfortunately" the most common problems in test management are of a very succinct nature:
- Too little time for the test, as software test activities mostly enjoy too low a prioritization in IT projects.
- Test activities were started too late in the IT project.
- There is not enough budget left for the test, or for the scope of testing that would be good for the IT project.
- Problems relating to test automation (difficult to maintain, too expensive, too lengthy, too slow, does not run at all, does not run constantly, poor or missing coverage)
- Differences between test team and development team or other stakeholders.
Test management in agile software projects
In agile software development, many classic test management methods are built in (or at least should be). As a result, test managers tend to have an advisory role in these development models, or "test manager" is more of a virtual role that is carried out by one (or more) people from the agile team. The agile team also thinks about many points from "What does test management include in detail?", but more as a team and through short agreements.
Test managers can continue to be the point of contact for the test tools and test methods used, as well as for test training opportunities and the workload. In addition, their task is to establish a test culture in the team (or beyond the team) and to create a healthy attitude towards high software quality. Most agile methods involve basic QA measures, for example stories and their acceptance criteria should be testable and test coverage (at least) in the form of unit tests is usually included.
Establishing and complying with the Definition of Done (DOD) is also usually associated with QA activities or test activities. Because even here the team usually defines a test coverage, for example for unit tests and interface tests, compliance with coding rules or other QA measures. If necessary, a test-driven procedure is even used (TDD, BDD).
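A test-driven step (TDD), as mentioned above, can be sketched in miniature: the test encoding an acceptance criterion is written first, then the simplest implementation that makes it pass. The story and the password rule below are assumed examples.

```python
# Minimal TDD step: the test exists before the implementation.
# The acceptance criterion ("at least 8 characters, at least one
# digit") is an assumed story detail.

# Step 1: the test, written first. Running it now would fail,
# because is_valid_password does not exist yet.
def test_password_policy():
    assert not is_valid_password("short")          # < 8 characters
    assert not is_valid_password("nodigitshere")   # no digit
    assert is_valid_password("longenough1")

# Step 2: the simplest implementation that satisfies the test.
def is_valid_password(pw: str) -> bool:
    return len(pw) >= 8 and any(c.isdigit() for c in pw)

test_password_policy()
```

The test doubles as the story's testable acceptance criterion, which is exactly why DoD entries and TDD are grouped together here.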
In common models such as SCRUM, the role of the test manager does not exist directly. But the role of the tester does not exist in SCRUM either. Nevertheless, there are of course "test activities" in SCRUM, and the concerns of the test manager (or the scope of test management) are also present in the SCRUM team, but distributed throughout the team and rather implicit.
How a company integrates its quality assurance in the SCRUM team varies greatly. Often, depending on the people in the team, this also differs greatly from team to team within the company. This is how it should be, because a basic idea of the Agile Manifesto is "individuals and interactions over processes and tools". In other words, the agile team first looks at what the team can do and who has what knowledge, and divides the work so that it brings the best result, including the highest quality. That's the theory.
The main responsibility in the SCRUM team lies with the product owner, who is responsible for the success of the product. Most of the facets of the test manager or test management can often be found here in SCRUM teams. Because the product owner should ensure that the team displays a high level of quality awareness and that every member of the development team feels responsible for the quality of the product from the start. Thanks to the agile approach, synergy effects can of course be used.
In SCRUM teams, however, there is often a separation between "developer" and "tester", even if SCRUM does not specify it. In SCRUM, "testers" are often "test automators", because test automation is used in most agile projects due to the short cycles involved: at the unit test level anyway, but also at the interface and GUI level. Continuous integration or continuous delivery has also established itself as a standard in the agile sector. This means that basic tests run immediately after code check-in, while more complex tests should run every night.
But here, too, all the questions from test management (test environment? What to test? What to automate and what not? Who does what?) do not go unanswered; they are answered by the team, or with "commitment" from the team. If there is a person from the test area or a test manager in the agile team, it naturally makes sense to entrust this person with questions and tasks from the area of testing and test management.
In this way, developers can support the tester in the creation and integration of automated tests, while the tester himself can draw the developers' attention to critical areas in the test objects at an early stage and carry out reviews. In addition, the team is able to react flexibly to changes, as they are not tied to a rigid test plan.
Excursus: Test management as support for the contractual situation for commissioned software
It is not uncommon for the acceptance criteria of commissioned software to be known before the contract is signed. Even if they are not yet fully defined, the stakeholders usually have a fairly concrete idea of many acceptance criteria when asked in detail.
To put a stop to later disputes, it therefore also makes sense to create acceptance criteria early in order to pin down the contractual position for a software order.
By anchoring the acceptance criteria in the contract, the project can be processed and accepted quickly, without unnecessary discussion about whether a functionality desired by the customer is a change request or an original requirement.
Using concrete acceptance criteria that can also be used for testing has the additional advantage of simplifying the creation of a test concept. Scheduling and milestones, which would also appear in the test concept, can likewise migrate directly from the definitions in the contract into the concept.
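One way to make a contractual acceptance criterion directly usable for testing, as described above, is to phrase it as an executable acceptance test. A minimal sketch in Python; the criterion wording, the `calculate_shipping` function and its 50 EUR threshold are invented for illustration:

```python
# Sketch: a contractual acceptance criterion written as an executable test.
# Hypothetical criterion from the contract: "Orders of 50 EUR or more ship
# free of charge; orders below 50 EUR incur a flat 4.90 EUR shipping fee."

def calculate_shipping(order_total):
    """System under test (illustrative implementation)."""
    return 0.0 if order_total >= 50.0 else 4.90

def test_acceptance_free_shipping_threshold():
    # Criterion part 1: free shipping from 50 EUR upwards
    assert calculate_shipping(50.0) == 0.0
    assert calculate_shipping(120.0) == 0.0
    # Criterion part 2: flat fee below the threshold
    assert calculate_shipping(49.99) == 4.90
    assert calculate_shipping(0.0) == 4.90

if __name__ == "__main__":
    test_acceptance_free_shipping_threshold()
    print("acceptance criterion met")
```

Because the test mirrors the contract wording one-to-one, passing it can serve as objective evidence at acceptance time that the agreed functionality has been delivered.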
With this construct it is important that the test manager or the test team is involved in the development process right from the start, because their expertise must flow directly into what can be tested and how it can be tested later. This construct has advantages in itself, as almost every textbook teaches us to start testing as early as possible in an IT project.
As the agile world has shown us in the last few decades (or would like us to believe), the early and (too) detailed clarification of acceptance criteria is an undertaking that entails risks and major hurdles, because the views of the stakeholders can change.
Even the stakeholders only develop a real feeling for the commissioned software during the development process. Most of the time, they only learn over time what their software really needs. In this case, the contractual acceptance criteria must be adjusted by mutual agreement.
Successful software testing requires not only time and personnel, but also end-to-end test management. Without integrating the test management process into the project flow, delays and budget overruns will occur again and again. In the worst case, the client may even refuse to accept the software.
Defined quality standards can only be met through a systematic approach and the consistent implementation of all relevant topics. This can only succeed if the test process is established in the company itself, is adequately managed, and the responsible parties are provided with sufficient resources.
Your experiences, questions, feedback?
Do you have any questions about test management or agile testing? Or would you like to add something from your own test management know-how? Feedback, ideas and questions are welcome in the comments below.
- ISTQB Glossary: http://glossar.german-testing-board.info (accessed 07.02.2017)
- Heise News: https://www.heise.de/newsticker/meldung/Bericht-Viele-IT-Projekte-scheitern-an-unzureichenden-Tests-1233377.html (accessed 07.02.2017)
- Heise News: https://www.heise.de/resale/artikel/Fuenf-goldene-Regeln-fuer-ervielreiches-Testmanagement-1247450.html (accessed 07.02.2017)
- Wikipedia: https://en.wikipedia.org/wiki/ISO/IEC_29119 (accessed 07.02.2017)
- ISTQB CTFL TM: http://www.german-testing-board.info/wp-content/uploads/2016/07/CTAL_Lehrplan2012_TM_Final_Germ_V100.pdf (accessed 07.02.2017)